UCPSO: A Uniform Initialized Particle Swarm Optimization Algorithm with Cosine Inertia Weight

The particle swarm optimization algorithm (PSO) is a meta-heuristic algorithm with swarm intelligence. It has the advantages of easy implementation, high convergence accuracy, and fast convergence speed. However, PSO suffers from falling into a local optimum or premature convergence, and a better performance of PSO is desired. Some methods adopt improvements in PSO parameters, particle initialization, or topological structure to enhance the global search ability and performance of PSO. These methods contribute to solving the problems above. Inspired by them, this paper proposes a variant of PSO with competitive performance called UCPSO. UCPSO combines three effective improvements: a cosine inertia weight, uniform initialization, and a rank-based strategy. The cosine inertia weight is an inertia weight in the form of a variable-period cosine function. It adopts a multistage strategy to balance exploration and exploitation. Uniform initialization can prevent the aggregation of initial particles. It distributes initial particles uniformly to avoid being trapped in a local optimum. A rank-based strategy is employed to adjust an individual particle's inertia weight. It enhances the swarm's capabilities of exploration and exploitation at the same time. Comparative experiments are conducted to validate the effectiveness of the three improvements. Experiments show that the UCPSO improvements can effectively improve global search ability and performance.


Introduction
Since the particle swarm optimization algorithm (PSO) was proposed by Kennedy and Eberhart in 1995 [1], it has obtained great achievements in finding the optimal value of continuous nonlinear equations [2]. PSO is a special branch of evolutionary algorithms. It adopts social learning among swarms and the self-cognition of individuals to replace the common theory of evolutionary algorithms (EAs)-survival of the fittest [3]. The bionic mechanism of PSO gives PSO swarm intelligence [4] and enables PSO to imitate the complex search behaviour of swarms, such as bird swarms, fish schools, and ant colonies. Different from EAs, PSO has a simpler iteration mechanism and fewer control parameters [5]. Therefore, it is widely used in practical engineering areas. For example, PSO is usually applied to image processing [6], parameter optimization [7,8], scheduling optimization [9], clustering [10], and price forecasting [11].
One of the main advantages of PSO is its easy implementation [12]. A large number of numerical experiments also prove that PSO has high convergence accuracy and a fast convergence speed [13]. Consequently, researchers have carried out numerous studies on PSO. However, some limitations of PSO have been found during long-term research work. When facing complex functions, PSO is troubled by falling into a local optimum or premature convergence [14]. Meanwhile, the performance of the original PSO is inadequate and must be improved. To solve these problems, many researchers have proposed improvements.
These improvements mainly focus on PSO parameters, particle initialization, and population topology.
Inertia weight is a very important parameter in the PSO algorithm. It controls the balance between the two critical behaviours of PSO: global and local search. Researchers have created various forms of inertia weight. These forms of inertia weight enhance the performance of PSO. Shi and Eberhart proposed a parameter called inertia weight ω in PSO to balance exploration and exploitation [15]. The appearance of ω created a new way to improve the performance of PSO. Then, Eberhart and Shi introduced a linearly decreasing inertia weight [16]. This linear decrease in inertia weight greatly improves the comprehensive performance of PSO. It relieves the problem of falling into a local optimum. Recently, Tian et al. adopted a multistage strategy to refine the change process of the inertia weight. They split the curve of the inertia weight into two stages to satisfy specific requirements. The variant proposed by them achieves excellent performance in experiments [17]. The quality of the initial particles is related to the PSO results. Many works have been performed to distribute initial particles more dispersedly and make initial particles closer to the global optimum. Tian [18], Zhang [19], and Xu [20] adopted chaotic sequences for particle initialization to increase the diversity of initial particles. Chaotic initialization achieves certain success compared to random initialization under the same conditions. Rahnamayan [21] employed a symmetry strategy in swarm initialization. Symmetric initialization can prevent initial particles from being distant from the global optimum. DMPSO [22] combines chaotic initialization with opposition-based initialization. Experiments validate that the hybrid initialization can recognize the search area better. In addition, MCJPSO [23] randomly divides the entire search space and distributes particles over the search space in independent slots.
This semirandom initialization can overcome the limitation of the original PSO. Rauf et al. used the Weibull probability sequence to generate numbers at random locations for swarm initialization.
This method is able to enhance the diversity of swarms [24].
To enhance the global search ability and the comprehensive performance of PSO, a uniform initialized particle swarm optimization algorithm with cosine inertia weight (UCPSO) is proposed in this paper. UCPSO combines three effective improvements: an inertia weight in the form of a variable-period cosine function, uniform initialization, and a rank-based strategy for individual particle inertia weights. The cosine inertia weight introduced in this paper adopts the multistage strategy. It divides the change process of the inertia weight into three stages. It can balance exploration and exploitation more specifically, help particles transform from global search to local search smoothly, and improve the convergence accuracy. Uniform initialization initializes a particle randomly as the base point and then generates the other initial particles based on this base point. The initial particles are evenly distributed in each dimension, and the positions of each particle in each dimension are random. This mechanism can prevent the aggregation of initial particles. It distributes particles uniformly to recognize the search area more comprehensively. Uniform initialization is able to avoid falling into a local optimum and improve the search efficiency. In addition, this paper employs a rank-based strategy to adjust individual particle inertia weights. It makes the particles that are close to the swarm's best position focus on mining and makes particles that are far away from the swarm's best position keep exploring. It can enhance the global and local search abilities of swarms at the same time.
In recent years, researchers have proposed some effective variants of PSO. Ye et al. proposed an improved multiswarm particle swarm optimization with a dynamic learning strategy (PSO-DLS). It classifies the particles of each subswarm into ordinary particles and communication particles [25]. Lynn and Suganthan proposed the ensemble particle swarm optimizer (EPSO), which combines the characteristics of several PSO variants [26]. In EPSO, the best-performing algorithm for each generation can be determined by a self-adaptive scheme. Heterogeneous comprehensive learning particle swarm optimization (HCLPSO) divides the whole swarm into an exploration subpopulation and an exploitation subpopulation [27]. The CL strategy is used to breed learning exemplars for both of them. Gong et al. proposed genetic learning particle swarm optimization (GLPSO), which adopts selection, crossover, and mutation [28]. By performing these operators on the historical information of particles, GLPSO is able to construct diversified and high-quality learning exemplars to guide the swarm. The purpose of designing UCPSO is to obtain a variant of PSO that has a good comprehensive performance and the ability to escape from a local optimum. In addition, the three improvements in UCPSO should be easy to use. They are introduced to help researchers improve the global search ability and performance of PSO. A large number of comparative experiments based on benchmark functions were used to validate the effectiveness of the UCPSO improvements.
This paper is organized as follows. Section 2 introduces the standard PSO and related research on inertia weight and particle initialization. Section 3 describes UCPSO and the three improvements in detail. Experiments are presented in Section 4. The conclusion is given in Section 5.

Particle Swarm Optimization Algorithms
2.1. Standard PSO. PSO is a population-based stochastic algorithm [15]. It finds the optimal solution in a given range by mimicking the behaviour of birds. The particles in the swarm are potential solutions, and n is the total number of particles in the swarm. Every particle remembers its current position X_i, its current velocity V_i, and the best position it has ever visited, Pbest_i (1 ≤ i ≤ n). The swarm also remembers the swarm's best position, Gbest. Particles find the optimal solution through iterations. The position and velocity of particles are updated as follows:

v_id(t + 1) = ω × v_id(t) + c1 × r1 × (Pbest_id − x_id(t)) + c2 × r2 × (Gbest_d − x_id(t)), (1)

x_id(t + 1) = x_id(t) + v_id(t + 1), (2)

where t denotes the current iteration, t ≤ t_max, and d (1 ≤ d ≤ D) denotes the dimension. ω is the inertia weight. c1 and c2 are acceleration coefficients, which control the influence of Pbest_i and Gbest in the iteration. r1 and r2 are random numbers in the range [0, 1]. PSO is usually terminated after reaching the allowed maximum number of iterations or meeting the stopping criterion. The best solution of a problem is the final Gbest. Figure 1 shows the concept of a particle's iteration in a graphical way. In equation (1), ω × v_id(t) represents the effect of inertia, c1 × r1 × (Pbest_id − x_id(t)) represents the effect of self-cognition, and c2 × r2 × (Gbest_d − x_id(t)) represents the effect of social learning. The cooperation between them contributes to finding the optimal solution. The function used to evaluate the position of a particle is usually called the fitness function F(X_i). To prevent particles from exceeding the search area, the position and velocity of a particle are always limited to the allowed range. When a particle reaches the boundary of the search area, its velocity should be reversed to improve the search efficiency. The pseudocode of the standard PSO is shown as follows (Algorithm 1):

(1) Initialize n, c1, c2, ω, particles' position X and velocity V, then find Pbest and Gbest, t = 1;
(2) while (t ≤ t_max or the precision is not met)
(3)   for i = 1 : n
(4)     for d = 1 : D
(5)       Update velocity v_id by equation (1);
(6)       Update position x_id by equation (2);
(7)     end
(8)     Update Pbest_i and Gbest;
(9)   end
(10)  t = t + 1;
(11) end
ALGORITHM 1: Pseudocode of the standard PSO.
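The update rules above can be sketched in Python. This is a minimal illustrative implementation, not the authors' code: the values ω = 0.729 and c1 = c2 = 1.49445 are common defaults from the PSO literature, and the velocity clamp at 20% of the search range is an assumption.

```python
import numpy as np

def standard_pso(f, lb, ub, n=30, t_max=300, w=0.729, c1=1.49445, c2=1.49445, seed=0):
    """Minimal standard PSO following equations (1) and (2).

    f      -- fitness function mapping an (n, D) array to n fitness values
    lb, ub -- per-dimension lower/upper bounds (length-D sequences)
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    X = lb + rng.random((n, D)) * (ub - lb)      # random initial positions
    V = np.zeros((n, D))
    v_max = 0.2 * (ub - lb)                      # assumed velocity clamp
    pbest, pbest_f = X.copy(), f(X)
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for t in range(t_max):
        r1, r2 = rng.random((n, D)), rng.random((n, D))
        # equation (1): inertia + self-cognition + social learning
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        V = np.clip(V, -v_max, v_max)
        X = X + V                                # equation (2)
        # reverse velocity at the boundary, then clamp the position
        out = (X < lb) | (X > ub)
        V[out] = -V[out]
        X = np.clip(X, lb, ub)
        fx = f(X)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = X[better], fx[better]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

sphere = lambda X: np.sum(X**2, axis=1)          # simple bowl-shaped test function
best, best_f = standard_pso(sphere, [-5] * 10, [5] * 10)
```

On the 10-dimensional sphere function the swarm converges rapidly toward the origin, which is the behaviour the later sections try to preserve on harder landscapes.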

Different Forms of Inertia Weight.
The inertia weight reflects the influence of the previous velocity V_i(t) on the new velocity V_i(t + 1). A large inertia weight can prevent particles from going to the region of interest (hereafter called the ROI) immediately.
This makes particles continue to search outside the ROI for a period of time. A small inertia weight can make particles go to the ROI immediately and search within the ROI. That is, a large inertia weight enhances the global search capability (hereafter called exploration), and a small inertia weight enhances the local search capability (hereafter called exploitation). Exploration can prevent particles from falling into a local optimum, but it also leads to low convergence accuracy and a slow convergence speed. Exploitation can accelerate the convergence speed and improve the convergence accuracy, but it makes the algorithm converge prematurely or become trapped in a local optimum easily [29]. These two functions greatly influence the performance of PSO. Therefore, it is very important to choose an appropriate inertia weight.
Since inertia weight was proposed, many researchers have made contributions in this field. Some classical forms of inertia weight have been proposed, such as time invariant [15], linear time variant [16,30], nonlinear time variant [17,31], and other forms of inertia weight [32][33][34]. The well-known forms of inertia weight mentioned above are described in detail in the subsections below.

Time Invariant Inertia Weight.
To improve the performance of the original PSO, Shi and Eberhart proposed a parameter called inertia weight ω to balance exploration and exploitation in 1998 [15]. First, the inertia weight appeared in the form of a constant. They found that a large inertia weight facilitates exploration, while a small inertia weight facilitates exploitation. The recommended range of the inertia weight is [0.9, 1.2]. The computational results showed that the overall performance of PSO was improved empirically:

ω_0(t) = c.

ω_0(t) is easy to implement, so it has been widely used.

Linear Time Variant Inertia Weight.
After the concept of inertia weight was proposed, a linear time variant inertia weight was introduced in [16] to further improve the performance of the PSO algorithm. The mechanism of the linear time variant inertia weight ω_1(t) is shown in the following equation:

ω_1(t) = ω_ini − (ω_ini − ω_fin) × (t / t_max).

The inertia weight decreases linearly from the initial value ω_ini to the final value ω_fin as the number of iterations increases. The linear time variant inertia weight takes the demands of particles in different periods into account.
There are many other mechanisms of the linear time variant inertia weight. Specifically, Zheng proposed an increasing linear time variant inertia weight ω_2(t) (ω_ini < ω_fin). It is proven that the increasing mechanism performs better than the decreasing mechanism on some test functions [30]. The mechanism of inertia weight ω_2(t) is

ω_2(t) = ω_ini + (ω_fin − ω_ini) × (t / t_max).

Nonlinear Time Variant Inertia
Weight. Based on the linear time variant inertia weight, some researchers think that a nonlinear mechanism is more suitable for the demands of particles. Therefore, many nonlinear time variant mechanisms have been proposed. Chatterjee [31] introduced a nonlinear time variant inertia weight combined with a quadratic function, and its mechanism is

ω_3(t) = (ω_ini − ω_fin) × ((t_max − t) / t_max)^2 + ω_fin.

For the sake of a better inertia weight, some researchers abandoned the continuous function and began to research the multiple stages of inertia weight. Tian [17] proposed a sigmoid increasing inertia weight ω_4(t) and obtained an algorithm with satisfactory performance. The mechanism of inertia weight ω_4(t) is a sigmoid increasing function of t.

Other Forms of Inertia
Weight. Some researchers have also proposed other effective strategies to adjust the inertia weight, such as the random strategy, the chaotic strategy, and the self-adaptive strategy. Randomness is a natural property of PSO, and it is also the reason why PSO can be applied to almost all optimization problems. It is difficult to predict whether exploration or exploitation would be better during the iteration. To address this problem, researchers thought of using random strategies to adjust the inertia weight. A random inertia weight ω_5(t) was introduced in [32]. The mechanism of inertia weight ω_5(t) is

ω_5(t) = 0.5 + rand/2,

where rand is a random number in the range [0, 1], so 0.5 ≤ ω_5 ≤ 1. Feng [33] used a chaotic strategy to adjust the inertia weight and obtained a chaotic inertia weight ω_6(t). He added a chaotic term z(t) to the linearly decreasing inertia weight. The mechanism of inertia weight ω_6(t) is

ω_6(t) = (ω_ini − ω_fin) × ((t_max − t) / t_max) + ω_fin × z(t),

where the chaotic term z(t) is updated by a logistic map, the initial value of z(t) is a random number in the range [0, 1], and z(t) ≠ 0, 0.25, 0.5, 0.75, 1. Different from the random strategy and the chaotic strategy, some researchers chose an index to adjust the value of the inertia weight in real time. This kind of index can provide feedback on the state of the swarm. Zhang et al. [34] adopted an index φ_i to monitor the state of each particle, and they proposed a self-adjusted inertia weight ω_7(t) with μ = 100. On the right side of equation (11), the numerator of φ_i is the Euclidean distance from the position of the i-th particle to Gbest, and the denominator is the Euclidean distance from the position of the i-th particle to Pbest_i. Therefore, the index φ_i can reflect the state of the i-th particle dynamically. The performance of the self-adaptive inertia weight on some test functions is excellent, but the mechanism of the self-adaptive inertia weight is usually very complicated. It is difficult to design a widely applicable index.
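The classical schedules reviewed above can be written compactly. The functions below follow the forms described in this section with ω_ini = 0.9 and ω_fin = 0.4 as used later in the paper; the function names are ours.

```python
import random

W_INI, W_FIN, T_MAX = 0.9, 0.4, 1000

def w_constant(t):
    """ω0: time-invariant inertia weight [15]."""
    return 0.9

def w_linear_dec(t):
    """ω1: linearly decreasing from W_INI to W_FIN [16]."""
    return W_INI - (W_INI - W_FIN) * t / T_MAX

def w_linear_inc(t):
    """ω2: linearly increasing (ω_ini < ω_fin) [30]."""
    return W_FIN + (W_INI - W_FIN) * t / T_MAX

def w_random(t, rng=random.Random(0)):
    """ω5: random inertia weight in [0.5, 1] [32].
    The shared default RNG is fine for a sketch."""
    return 0.5 + rng.random() / 2
```

Plotting these four curves over t = 0 … T_MAX reproduces the qualitative shapes discussed above: flat, falling, rising, and noisy.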

Particle Initialization.
In addition, the quality of particle initialization is also critical to the performance of the PSO algorithm.
The volatility of particle initialization is the primary cause of the volatility of convergence speed and accuracy.

Random Initialization.
The original method of particle initialization is random initialization. The position of each dimension of each particle is distributed in the allowed range independently and randomly, as shown in the following equation:

x_id(1) = x_min_d + rand × (x_max_d − x_min_d). (13)

After initialization, if the particles are close to the global optimum, PSO tends to have good performance; if the particles are concentrated near a local optimum, PSO tends to fail. The random strategy of particle initialization inevitably leads to the volatility of initialization. However, if particle initialization abandoned randomness, it would be very difficult for PSO to solve various optimization problems without prior knowledge. Fixed initialization can only solve specific problems.
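Equation (13) amounts to one line of vectorized code; a sketch with a hypothetical helper name:

```python
import numpy as np

def random_init(n, lb, ub, rng=None):
    """Random initialization, equation (13): every coordinate is drawn
    independently and uniformly from the allowed range."""
    rng = rng or np.random.default_rng()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    return lb + rng.random((n, lb.size)) * (ub - lb)

# 50 particles in a 2-D search area, each coordinate in [-4, 4]
X = random_init(50, [-4, -4], [4, 4], np.random.default_rng(1))
```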

Chaotic Initialization.
Chaotic initialization employs chaotic sequences to make the initial particles more scattered. A common chaotic sequence called the logistic map is widely used because of its simplicity [35]. Its mechanism is as follows:

Z_{N+1} = a × Z_N × (1 − Z_N), (14)

where Z_N (1 ≤ N ≤ n) is the chaotic variable, Z_1 is a random value in the range [0, 1], and the other chaotic variables are obtained by equation (14). To prevent the chaotic variables from falling into a cycle, Z_1 ≠ 0, 0.25, 0.5, 0.75, 1. a is a constant that controls the level of chaos, and the recommended range for a is [3.5699, 4]. In this paper, Z_N is obtained by iterating equation (14) 10 times to make the initial swarm more chaotic. After n × D chaotic variables are obtained, chaotic initialization is completed by replacing the random matrix with the chaotic variables during the process of initialization.
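A sketch of chaotic initialization under the logistic map, following the description above. The 10 warm-up iterations and the banned starting values come from the text; replacing a banned start with 0.1 is an arbitrary choice of ours.

```python
import numpy as np

def chaotic_init(n, lb, ub, a=4.0, warmup=10, rng=None):
    """Chaotic initialization via the logistic map of equation (14):
    z' = a * z * (1 - z).  Each chaotic variable is iterated `warmup`
    times (10 in the paper) before it replaces the random matrix of
    random initialization."""
    rng = rng or np.random.default_rng()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    z = rng.random((n, lb.size))
    # avoid the fixed points / short cycles 0, 0.25, 0.5, 0.75, 1
    banned = np.isin(z, [0.0, 0.25, 0.5, 0.75, 1.0])
    z[banned] = 0.1
    for _ in range(warmup):
        z = a * z * (1.0 - z)        # logistic map, equation (14)
    return lb + z * (ub - lb)

Xc = chaotic_init(50, [-4, -4], [4, 4], rng=np.random.default_rng(2))
```

With a = 4 the logistic map stays inside [0, 1], so the resulting positions remain inside the search area.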

Opposition-Based Initialization.
For two particles that are symmetrical about the centre of the search area, one of the two is closer to the global optimum than the other (the case where the two distances are equal is a special case). Opposition-based initialization generates a subswarm randomly and combines it with its symmetric subswarm. Therefore, opposition-based initialization can avoid the situation in which all particles are far from the global optimum. Rahnamayan [21] introduced opposition-based initialization, and its mechanism is

x̃_id = x_min_d + x_max_d − x_id.
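Opposition-based initialization can be sketched as follows. We assume, for illustration, that half of the swarm is generated randomly and the other half is its mirror image; for odd n one extra particle would need separate handling.

```python
import numpy as np

def opposition_init(n, lb, ub, rng=None):
    """Opposition-based initialization: generate n/2 random particles
    and mirror each about the centre of the search area, so no particle
    and its opposite can both be far from the global optimum."""
    rng = rng or np.random.default_rng()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    half = lb + rng.random((n // 2, lb.size)) * (ub - lb)
    mirror = lb + ub - half          # the opposite particle, per the equation above
    return np.vstack([half, mirror])

Xo = opposition_init(50, [-4, -4], [4, 4], np.random.default_rng(3))
```

For the symmetric area [-4, 4] x [-4, 4] the centre is the origin, so each mirrored particle is simply the negation of its partner.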

UCPSO Algorithm
Based on the research on inertia weight and particle initialization, UCPSO is proposed as a competitive variant of PSO. UCPSO adopts three new strategies, and their details are presented in the following sections.

Inertia Weight in the Form of Variable-Period Cosine
Function. There is a popular form of nonlinear time variant inertia weight. It maintains a large value in the early stage and keeps a small value in the final stage. In common optimization problems, it can enhance the global search ability of PSO. Then, the multistage inertia weight was proposed. It puts forward more specific requirements for the change process of the inertia weight: (a) initial stage: the inertia weight keeps a large value for a period of time to carry out the global search and reduce the probability of falling into a local optimum (this stage is also called the global search stage); (b) intermediate stage: the inertia weight drops rapidly and transits from global search to local search (this stage is also called the decelerating transition stage); (c) final stage: the inertia weight keeps a small value for a long time to help PSO converge to an accurate optimal solution quickly (this stage is also called the local search stage). The change process of the cosine function in the range [0, π] meets the requirements of the multistage inertia weight. In the range [0, π/6], the cosine function maintains a large value (≥ √3/2) and changes slowly. In the range [π/6, 5π/6], the cosine function declines rapidly. In the range [5π/6, π], the cosine function maintains a small value (≤ −√3/2) and changes slowly. Because the cosine function is simple and easy to use, an inertia weight in cosine form is adopted in this paper. However, the original cosine function is not consistent with the requirements of the multistage inertia weight. Therefore, the original cosine function needs to be adjusted. An iterative term I(t) is added into the cosine function to adjust the period, and ω_cos(t) is rescaled to the range [ω_fin, ω_ini], as shown in the following equations:

ω_cos(t) = ((ω_ini − ω_fin)/2) × cos(I(t)π/t_max) + (ω_ini + ω_fin)/2, (16)

I(t) = I(t − 1) + a, I(0) = 0, a ∈ {a_1, a_2, a_3},

where a is a constant that adjusts the period of ω_cos(t). The values of a_1, a_2, and a_3 control the length of each stage of ω_cos(t): a = a_1 in the global search stage, a = a_2 in the decelerating transition stage, and a = a_3 in the local search stage. According to the requirements of ω_cos(t), we limit the phase I(t)π/t_max to the range [0, π], and I(t)π/t_max is required to increase from 0 to π. While t increases from 0 to t_max, I(t) needs to increase from 0 to t_max. Therefore, a_1, a_2, and a_3 have to satisfy the following equation:

1/(6a_1) + 2/(3a_2) + 1/(6a_3) = 1.

The pseudocode of updating ω_cos is shown as follows (Algorithm 2):

(1) Let a = a_1;
(2) for t = 1 : t_max
(3)   Update ω_cos(t) by equation (16);
(4)   Update I(t);
(5)   if I(t)π/t_max ≥ π/6
(6)     a = a_2;
(7)   if I(t)π/t_max ≥ 5π/6
(8)     a = a_3;
(9) end
ALGORITHM 2: Pseudocode of updating ω_cos.

The parameter analysis experiment for a_1, a_2, and a_3 is shown in Section 4.2. The recommended configuration is a_1 = 4/3, a_2 = 16/3, and a_3 = 2/9. The curves of the different inertia weights ω_0(t) ∼ ω_7(t) and ω_cos(t) (a_1 = 4/3, a_2 = 16/3, a_3 = 2/9) are displayed in Figure 2. To give all the inertia weights the same value range, all ω_ini are set to 0.9 and all ω_fin are set to 0.4 (for ω_2(t), ω_ini = 0.4 and ω_fin = 0.9), ω_0(t) = 0.9, and ω_5(t) = 0.4 + rand/2.

Uniform Initialization.
There is no clear mechanism in random initialization, chaotic initialization, or opposition-based initialization to avoid the aggregation of initial particles.
This situation will lead some areas to be searched repeatedly and other areas to be ignored. This reduces the search efficiency and the possibility of finding the global optimum.
To solve this problem, a particle initialization method with both randomness and uniformity (called uniform initialization) is proposed in this paper. In Algorithm 3, Line 1 initializes a particle randomly as the base point, and X_1 = [X_11, X_12, ..., X_1D] is the position of the base point. Lines 2-4 generate a D × (n − 1) random matrix R = [R_1, R_2, ..., R_D] to ensure the randomness of the initial particles. Lines 5-12 divide the length of each dimension by n to obtain the minimum distance between particles in the corresponding dimension, and then distribute the particles uniformly in each dimension to avoid the aggregation of particles. If the position of a particle exceeds the allowed range of a dimension, the range of the corresponding dimension is subtracted from it. Uniform initialization ensures that the distances between particles are larger than a certain value. It can avoid the aggregation of particles and distributes the initial particles uniformly to recognize more areas at the beginning. Figure 5 represents the result of uniform initialization. We find that the level of aggregation is low and the distribution of particles is uniform. Uniform initialization is a good combination of randomness and uniformity. Figure 5 is produced under the configuration n = 50, D = 2, with a 4 × 4 rectangular search area.
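One plausible reading of uniform initialization is sketched below. The per-dimension random permutation of the offsets is our assumption; the description above only requires the offsets to be random and the per-dimension spacing to be at least range/n.

```python
import numpy as np

def uniform_init(n, lb, ub, rng=None):
    """Sketch of uniform initialization (one reading of Algorithm 3):
    a random base particle is chosen; every other particle is offset
    from it by a multiple of (range / n) in each dimension, with the
    offsets randomly permuted per dimension; positions that leave the
    search area wrap around by subtracting the range."""
    rng = rng or np.random.default_rng()
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    span = ub - lb
    base = lb + rng.random(lb.size) * span       # random base point (Line 1)
    step = span / n                              # minimum spacing per dimension
    X = np.empty((n, lb.size))
    X[0] = base
    for d in range(lb.size):
        offsets = rng.permutation(np.arange(1, n)) * step[d]
        X[1:, d] = base[d] + offsets
        over = X[:, d] > ub[d]                   # wrap back into the range
        X[over, d] -= span[d]
    return X

Xu = uniform_init(50, [-4, -4], [4, 4], np.random.default_rng(4))
```

Because the offsets in each dimension are the multiples 1 … n−1 of range/n taken modulo the range, the sorted coordinates in every dimension are spaced exactly range/n apart, which is the anti-aggregation guarantee described above.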

Rank-Based Strategy for Individual Particle's Inertia Weight (RIW).
The forms of inertia weight mentioned above are all assigned numerical values according to the state of the whole swarm. In fact, however, particles' states are diverse. An individual particle's need may not follow the swarm's need. Particles that are already in the ROI need a small inertia weight to exploit. Particles that are far away from the ROI need a large inertia weight to explore. A single value of the inertia weight cannot satisfy both requirements at the same time. Cooperation between these two kinds of particles can maximize the benefit of the whole swarm.
A rank-based strategy is adopted in this paper to solve this problem. Generally, the particles with small fitness values

Computational Intelligence and Neuroscience
(F(X_i)) are in the current ROI, while the particles with large F(X_i) are outside the ROI. Particles are sorted by fitness value from small to large. Adding a rank-based strategy to the inertia weight can take both the overall and the individual requirements into account simultaneously. The mechanism of the RIWs is shown in equations (20) and (21):

ω_i = b_i × ω_swarm, (20)

where ω_i denotes the i-th particle's inertia weight, ω_swarm denotes the swarm's inertia weight, b_i is the adjustment factor for the i-th particle's inertia weight, and rank_i denotes the ranking of the i-th particle according to its fitness value. Equation (21) maps rank_i to one of three adjustment factors b_1 < b_2 = 1 < b_3: the best-ranked particles receive b_1, the middle-ranked particles receive b_2, and the worst-ranked particles receive b_3. The pseudocode of the RIWs is shown as follows (Algorithm 4). The parameter analysis experiment for b_1 and b_3 and the recommended configuration are given in Section 4.2. This paper adopts the above three mechanisms in the standard PSO and proposes a uniform initialized particle swarm optimization with cosine inertia weight (UCPSO). To elaborate its mechanism, the pseudocode of UCPSO is shown as follows (Algorithm 5). The comparative experiments are based on the benchmark functions f_1 − f_6 [36]. The details of f_1 − f_6 are specified in Table 1. f_1 − f_6 include 2 many-local-minima functions, 2 bowl-shaped functions, and 2 valley-shaped functions. Therefore, f_1 − f_6 are representative. The experiment comparing UCPSO with PSO, MCJPSO [23], PSO-DLS [25], EPSO [26], HCLPSO [27], and GLPSO [28] is based on the CEC2020 benchmark functions, as shown in Table 2. The parameter configurations of the other six algorithms are set according to their original references, which are shown in Table 3.
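A sketch of the rank-based adjustment follows. The split into thirds and the sample values b1 = 0.9, b3 = 1.1 are illustrative assumptions; the paper fixes only b_1 < b_2 = 1 < b_3 and refers to Section 4.2 for the recommended values.

```python
import numpy as np

def rank_based_weights(fitness, w_swarm, b1=0.9, b3=1.1):
    """Sketch of equations (20)-(21): w_i = b_i * w_swarm, where b_i
    depends on the particle's fitness rank.  Small fitness = good rank,
    so the best third exploits (b1 < 1), the middle third keeps the
    swarm weight (b2 = 1), and the worst third explores (b3 > 1)."""
    fitness = np.asarray(fitness, float)
    n = fitness.size
    order = np.argsort(fitness)              # ascending fitness
    rank = np.empty(n, int)
    rank[order] = np.arange(n)               # rank 0 is the best particle
    b = np.where(rank < n // 3, b1,
        np.where(rank < 2 * n // 3, 1.0, b3))
    return b * w_swarm

# nine particles whose fitness equals their index, swarm weight 0.5
w_i = rank_based_weights(np.arange(9.0), 0.5)
```

Particles near the swarm's best position thus get a smaller inertia weight than the swarm-level schedule prescribes, and distant particles get a larger one, which is exactly the division of labour described above.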
The nonparametric Wilcoxon signed-rank test is used to examine the significant difference between algorithms. In this article, a Wilcoxon signed-rank test at a 5% significance level is used. A pairwise comparison is conducted over the results obtained through several runs. The symbol "+" indicates that the proposed algorithm performs significantly better than the compared algorithm. The symbol "=" indicates that the proposed algorithm is not significantly different from the compared algorithm. The symbol "−" indicates that the compared algorithm performs significantly better than the proposed algorithm. The criteria include the mean of the best solutions (Mean), the standard deviation of the best solutions (SD), the success rate (SR) [37], and the average number of iterations (Average number). SR reflects the probability of obtaining a satisfactory result. To reduce the impact of extreme values, only the successful runs are counted when calculating the Average number. When the algorithm iterates successfully, the current number of iterations is recorded to calculate the Average number, and the algorithm continues to iterate. The Average number reflects the convergence speed effectively when combined with SR. Whether the algorithm iterates successfully or unsuccessfully is judged according to the following criterion:

|F(Gbest) − F(X*)| ≤ ε,

where X* is the global optimum and ε is the allowed maximum error. ε is often set to 0.001 in the engineering field. If |F(Gbest) − F(X*)| ≤ ε, the current iteration is successful; if |F(Gbest) − F(X*)| > ε, the current iteration is unsuccessful.

(1) while (t ≤ t_max or the precision is not met)
(2)   Sort F(X) to get rank = [rank_1, rank_2, ..., rank_n];
(3)   for i = 1 : n
(4)     Update ω_swarm;
(5)     Update b_i by equation (21);
(6)     Update ω_i by equation (20);
(7)   end
(8) end
ALGORITHM 4: Pseudocode of the RIWs.

(1) Initialize n, c_1, c_2, ω, particles' velocity V, and initialize particles' position X by Algorithm 3 (uniform initialization), then find Pbest and Gbest, t = 1;
(2) Calculate ω_cos by Algorithm 2;
(3) while (t ≤ t_max or the precision is not met)
(4)   Sort F(X) to get rank = [rank_1, rank_2, ..., rank_n];
(5)   for i = 1 : n
(6)     Update b_i by equation (21) and ω_i by equation (20);
(7)     for d = 1 : D
(8)       Update velocity v_id by equation (1);
(9)       Restrict v_id;
(10)      Update position x_id by equation (2);
(11)      Restrict x_id;
(12)    end
(13)    if F(X_i) < F(Pbest_i)
(14)      Pbest_i = X_i;
(15)      if F(Pbest_i) < F(Gbest)
(16)        Gbest = Pbest_i;
(17)      end
(18)    end
(19)  end
(20)  t = t + 1;
(21) end
ALGORITHM 5: Pseudocode of UCPSO.
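The SR and Average number criteria can be computed from recorded runs as follows. The helper name is hypothetical; `iters_to_success` holds, for each run, the iteration at which the criterion |F(Gbest) − F(X*)| ≤ ε was first met, or NaN if it never was.

```python
import numpy as np

def success_metrics(best_values, iters_to_success, f_star=0.0, eps=1e-3):
    """Compute the comparison criteria described above: a run succeeds
    when |F(Gbest) - F(X*)| <= eps, SR is the fraction of successful
    runs, and the Average number counts iterations of successful runs
    only (to reduce the impact of extreme values)."""
    best_values = np.asarray(best_values, float)
    iters = np.asarray(iters_to_success, float)
    success = np.abs(best_values - f_star) <= eps
    sr = success.mean()
    avg_iter = np.nanmean(np.where(success, iters, np.nan)) if success.any() else np.nan
    return sr, avg_iter

# three runs: two succeed (at iterations 120 and 150), one never does
sr, avg = success_metrics([1e-4, 5e-4, 0.2], [120, 150, np.nan])
```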

Parameter Analysis.
Based on the standard PSO with ω_cos(t), 15 different configurations of a_1, a_2, and a_3 are compared on the benchmark functions f_1 ∼ f_6. In these configurations, the duration of each stage changes in steps of t_max/8. Therefore, a relatively good parameter configuration can be obtained. All variants are tested on the experimental settings in Section 4.1.
In Table 4, the best performance is obtained when a_1 = 4/3, a_2 = 16/3, and a_3 = 2/9. In Figure 2, we can see that the curve of ω_cos(t) with a_1 = 4/3, a_2 = 16/3, and a_3 = 2/9 satisfies the requirements of the multistage inertia weight. The inertia weights of the particles that are ranked in the middle remain constant, so the value of b_2 is equal to 1. According to the requirements of the inertia weight of the particles, b_1 < 1 and b_3 > 1. Particles that are already in the ROI need a small inertia weight to exploit. Particles that are far away from the ROI need a large inertia weight to explore. Therefore, b_1 should be less than 1, and b_3 should be greater than 1. A minor adjustment of the inertia weight enhances the cooperation within the swarm. If the inertia weight becomes too small, the diversity of the swarm will decrease. This situation will lead to premature convergence. If the inertia weight becomes too large, it will lead to low convergence accuracy because the particles outside the ROI participate insufficiently in the local search. The effects of b_1 and b_3 are relatively independent. Therefore, the effects of different values of b_1 or b_3 on the standard PSO with RIWs are compared separately. The comparative experiments are based on the benchmark functions f_1 − f_6. All variants are tested on the experimental settings in Section 4.1.

Comparing Inertia Weight ω cos (t) with Other Forms of Inertia Weight.
To compare the inertia weights ω_0(t) ∼ ω_7(t) and ω_cos(t), they are added into the standard PSO. The configurations of ω_0(t) − ω_6(t) and ω_cos(t) are elaborated in Section 3.1. In the inertia weight ω_7(t), ω_ini = 0.9 and ω_fin = 0.4. For brevity, the standard PSO with inertia weight ω_0(t) is abbreviated as PSO-ω_0, and so on. All variants are tested on the experimental settings in Section 4.1. Table 7 shows the performance of the standard PSO variants with nine different forms of inertia weight. If the SR of a variant is 0, its Average number is omitted. We can find that PSO-ω_cos has the highest SR in f_1 − f_5. In f_6, PSO-ω_cos still obtains the second highest SR. ω_cos(t) maintains a large value for a period of time, so its global search ability is improved. PSO-ω_cos has the smallest Mean on four benchmark functions and the smallest SD on three benchmark functions. In f_5 and f_6, PSO-ω_2 has the smallest Mean and SD, but its SR is still lower than that of PSO-ω_cos. ω_cos(t) adopts a multistage strategy to meticulously guide the behaviour of particles at different stages. Therefore, PSO-ω_cos obtains a better convergence quality than the other variants. PSO-ω_cos has the fastest convergence speed in f_5 and a moderate convergence speed in the other five benchmark functions. Figure 6 shows the convergence characteristics of the nine variants. The convergence speed of PSO-ω_cos is not very fast at the beginning, but PSO-ω_cos converges to the smallest fitness value in all six benchmark functions. In f_4 and f_6, PSO-ω_cos outperforms the other variants. PSO-ω_cos is able to avoid becoming trapped in a local optimum, and ω_cos(t) converts to the local search quickly and smoothly. These factors help PSO-ω_cos converge to an accurate solution.

Comparing Uniform Initialization with Other Particle Initializations.
Random initialization, chaotic initialization, opposition-based initialization, and uniform initialization are compared on the benchmark functions f_1-f_6. The four particle initializations are each added to the standard PSO-ω_1 for comparison. The configurations of random initialization, chaotic initialization, and opposition-based initialization are elaborated in Section 2.3. For chaotic initialization, a is set to 4. Uniform initialization adopts the recommended configuration in Section 3.2. The four variants are abbreviated as PSO-rand, PSO-chaotic, PSO-opposition, and PSO-uniform for simplicity. All variants are tested with the experimental settings in Section 4.1.
From Table 8, it can be seen that PSO-uniform performs better than the compared variants. PSO-uniform obtains the smallest mean on five benchmark functions and the highest SR on four. If the initial particles gather around a local optimum, the iteration is very likely to converge prematurely. Uniform initialization reduces the possibility of this case: it distributes the initial particles more uniformly and makes them more likely to approach the global optimum, which improves the SR and the convergence quality of the PSO algorithm. Except on f_5, PSO-uniform has at least the second-smallest average number, and it has the smallest average number on f_1. It can be concluded that PSO-uniform has better global search ability and better convergence speed on f_1-f_6.
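The general idea of distributing initial particles uniformly can be sketched with a stratified (Latin-hypercube-style) placement: each dimension's range is split into n equal cells, one particle is placed per cell, and the cells are shuffled independently per dimension. This is a hedged sketch of the concept only; the paper's exact uniform initialization is the one specified in Section 3.2, and the function name is an illustrative assumption:

```python
import numpy as np

def uniform_init(n, dim, lo, hi, rng=None):
    """Stratified initialization: one particle per equal-width cell in every
    dimension, with per-dimension shuffling to decorrelate coordinates."""
    rng = np.random.default_rng(rng)
    pts = np.empty((n, dim))
    for d in range(dim):
        # one uniform sample inside each of the n cells [i/n, (i+1)/n)
        cells = (np.arange(n) + rng.random(n)) / n
        rng.shuffle(cells)
        pts[:, d] = lo + cells * (hi - lo)
    return pts
```

Unlike purely random initialization, no cell of the search range is left empty in any dimension, so the initial swarm cannot aggregate around a single region.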
From Table 9, it can be observed that UCPSO outperforms PSO and PSO-DLS on the 20-dimensional CEC2020 benchmark functions. Except for F_1, F_8, and F_9, UCPSO obtains better results than MCJPSO.
This indicates that the performance of PSO is enhanced by the three proposed improvements. HCLPSO achieves excellent results in this experiment, and on most benchmark functions UCPSO keeps up with it. On F_4 and F_10, UCPSO performs almost as well as HCLPSO, and on F_5, UCPSO achieves the best result. On the hybrid and composition functions, UCPSO obtains good results and outperforms GLPSO. This shows that the performance of UCPSO is very competitive and that the three proposed improvements are effective.
Figure 7 shows the median convergence curves on the four types of benchmark functions. UCPSO has good convergence performance, especially in the first 500 iterations. UCPSO converges quickly and with high convergence accuracy, especially on F_4 and F_5. UCPSO maintains strong exploration and exploitation abilities and converges to an accurate solution in a short time.

Algorithm Complexity.
This section analyses the computational complexity of UCPSO. The computational cost of the original PSO involves the initialization (T_ini), the evaluation (T_eva), and the velocity and position update (T_upd) for each particle. D is the dimensionality of the search space, and t_max is the allowed maximum number of iterations. The computational complexity of PSO can be estimated as T(D) = T_ini + (T_eva + T_upd)·t_max = D + (D + 2D)·t_max = D(1 + 3·t_max). Therefore, the computational complexity of the original PSO is O(D·t_max). ω_cos(t) can be calculated in advance, so it is used directly during the iteration process. Uniform initialization adds a sorting step before the iteration begins, and RIWs add a sorting and assignment step to each iteration. Therefore, the computational complexity of UCPSO can be estimated as T(D) = T_ini + (T_eva + T_upd)·t_max = (D + D·log(n)) + (D + 2D + log(n) + D)·t_max = D(1 + log(n) + 4·t_max) + log(n)·t_max. Because the number of particles n is usually small, the computational complexity of UCPSO is O(D·t_max) too.
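Plugging representative values into the two cost estimates confirms that the log(n) terms are negligible. The values D = 20, t_max = 2000, and n = 20 below match the settings used in this section's experiments (40,000 evaluations with 20 particles):

```python
import math

def cost_pso(D, t_max):
    # T(D) = D * (1 + 3 * t_max), the original PSO estimate
    return D * (1 + 3 * t_max)

def cost_ucpso(D, t_max, n):
    # T(D) = D * (1 + log(n) + 4 * t_max) + log(n) * t_max, the UCPSO estimate
    return D * (1 + math.log(n) + 4 * t_max) + math.log(n) * t_max

# For D = 20, t_max = 2000, n = 20 the ratio is about 4/3 + log(n)/(3*D):
# the dominant terms are the D * t_max ones, so both costs are O(D * t_max).
ratio = cost_ucpso(20, 2000, 20) / cost_pso(20, 2000)
```

The ratio stays below 1.4, i.e., the sorting and assignment steps add only a modest constant factor rather than changing the asymptotic complexity.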
An experiment is carried out to compare the computational complexity of UCPSO with that of other PSO variants. T_0 is the time to run the following code: for i = 1:200000, x = i + 5.5; x = x + x; x = x/2; x = x*x; x = sqrt(x); x = log(x); x = exp(x); end. T_1 is the time to execute 40,000 evaluations of benchmark function F_1 by itself with 20 dimensions, and T_2 is the mean time over 30 runs to execute the algorithm with 40,000 evaluations of F_1 with 20 dimensions. The number of particles is 20.
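For reproducibility, the T_0 measurement loop can be written in Python as below. The original is MATLAB-style code; the "x = x/2" step is our reading of a garbled line in the source, and the function name is an illustrative assumption:

```python
import math
import time

def measure_t0():
    """Timing loop for the machine-dependent baseline T0 (CEC-style protocol)."""
    start = time.perf_counter()
    for i in range(1, 200001):
        x = i + 5.5
        x = x + x
        x = x / 2      # assumed reading of the garbled step in the source
        x = x * x
        x = math.sqrt(x)
        x = math.log(x)
        x = math.exp(x)
    return time.perf_counter() - start
```

In the usual CEC timing protocol, the machine-independent figure reported for an algorithm is then (T_2 - T_1)/T_0.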
According to Table 10, UCPSO spends the least time on F_1 apart from the original PSO. PSO-DLS takes up to twice as long as UCPSO. The computational complexity of UCPSO is lower than those of PSO-DLS, EPSO, and HCLPSO. Therefore, UCPSO is a relatively fast PSO algorithm with competitive performance; the three improvements in UCPSO do not greatly increase its computational complexity.

Application to Real-World Problems.
In this part, UCPSO is applied to real-world engineering optimization problems. PSO and UCPSO are tested on P_1 and P_4 of the CEC2011 real-world optimization problems, as shown in Table 11. For each test problem, 30 independent runs are performed. The population size is set to 20, and the maximum number of iterations is set to 2000; the allowed maximum number of iterations is used as the termination criterion for all algorithms.

Optimal Control of a Nonlinear Stirred Tank Reactor.
This problem is a multimodal optimal control problem. It describes a first-order irreversible chemical reaction carried out in a continuous stirred tank reactor [38]. The chemical process is modelled by two nonlinear differential equations, where u is the flow rate of the cooling fluid, x_1 is the dimensionless steady-state temperature, and x_2 is the deviation from the dimensionless steady-state concentration. The fitness function of this problem penalizes the squared deviations of the two states together with the control effort over the process horizon. The initial condition is x_1 = x_2 = 0.09. The search range is unconstrained, but the initial range of u is [0, 5]. The dimension of this problem is 1.
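As a hedged sketch, the fitness evaluation can be implemented by numerically integrating the two state equations. The equations and cost integral below follow our reading of the CEC 2011 technical report; the fixed-step Euler scheme, the step size, the horizon t_end = 0.72, and the function name are illustrative assumptions:

```python
import math

def cstr_fitness(u, dt=1e-3, t_end=0.72):
    """Euler simulation of the stirred tank reactor control problem (P4).

    Assumed state equations (our reading of the CEC 2011 technical report):
        x1' = -(2 + u) * (x1 + 0.25) + (x2 + 0.5) * exp(25*x1 / (x1 + 2))
        x2' = 0.5 - x2 - (x2 + 0.5) * exp(25*x1 / (x1 + 2))
    Assumed cost: J = integral of (x1^2 + x2^2 + 0.1*u^2) dt over [0, t_end].
    """
    x1 = x2 = 0.09                       # initial condition from the text
    J = 0.0
    for _ in range(round(t_end / dt)):
        e = math.exp(25.0 * x1 / (x1 + 2.0))
        dx1 = -(2.0 + u) * (x1 + 0.25) + (x2 + 0.5) * e
        dx2 = 0.5 - x2 - (x2 + 0.5) * e
        J += (x1 * x1 + x2 * x2 + 0.1 * u * u) * dt
        x1 += dx1 * dt
        x2 += dx2 * dt
    return J
```

A particle's position is the scalar control u, and the swarm minimizes J; a stiff-aware integrator would be preferable for aggressive control values, since the exponential term can make the system stiff.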
As shown in Table 12, the best result of UCPSO is slightly worse than the best result of PSO, but the worst result of UCPSO is better than the worst result of PSO. In Table 13, the best and worst results of UCPSO are better than the best and worst results of PSO, respectively. According to the results on P_1 and P_4, UCPSO has the ability to solve real-world engineering optimization problems.

Table 11: CEC2011 real-world optimization problems.

No.   Problem                                                         D
P_1   Parameter estimation for frequency-modulated (FM) sound waves   6
P_4   Optimal control of a nonlinear stirred tank reactor             1

Conclusion
In this paper, UCPSO is proposed to prevent PSO from falling into a local optimum and to improve its comprehensive performance. It adopts a variable-period cosine inertia weight ω_cos(t), uniform initialization, and a rank-based strategy for individual particles' inertia weights (RIWs). ω_cos(t) satisfies the requirements of the inertia weight at different stages and better balances exploration and exploitation. Uniform initialization avoids the aggregation of initial particles. RIWs increase the diversity of the swarm and improve exploration and exploitation at the same time. These three improvements enhance the global search ability of PSO and ensure the competitive comprehensive performance of UCPSO. Extensive tests based on benchmark functions validate the effectiveness of the improvements and the performance of UCPSO. In future work, the authors will perform more experiments to obtain a better configuration of parameters. We intend to apply UCPSO to practical engineering fields, such as clustering, parameter optimization, image segmentation, and industrial scheduling. After that, we will continue to study other forms of inertia weight and research more effective improvements that are easy to implement.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.