An Improved Method of Particle Swarm Optimization for Path Planning of Mobile Robot

The existing particle swarm optimization (PSO) algorithm suffers from application limitations and slow convergence when solving the mobile robot path planning problem. This paper proposes an improved PSO integration scheme that combines uniform distribution initialization, an exponentially decaying inertia weight, cubic spline interpolation, and learning factors with enhanced control. On the standard test functions, our improved PSO (IPSO) achieves better optimal results with fewer iteration steps than the four path planning algorithms developed in the existing literature. IPSO reaches the optimal path length in fewer than 20 iteration steps and reduces the path length and simulation time by 2.8% and 1.1 seconds, respectively.


Introduction
Path planning is different from motion planning, where dynamics must be considered. Its purpose is to find the optimal motion path in the least amount of time while fully modeling the environment [1]. For the path planning problem of mobile agents, researchers have proposed many algorithms, which can be classified into two categories: traditional path planning methods and bionic intelligent methods. The former includes the A* algorithm [2], Dijkstra [3], RRT [4], and the artificial potential field method [5]. Bionic intelligent algorithms include the differential evolution algorithm [6], genetic algorithm [7], ant colony algorithm [8], artificial fish swarm algorithm [9], and PSO [10]. The PSO algorithm is widely used in the practical application and theoretical research of mobile agent path planning due to its strong search ability, fast convergence speed, and high efficiency [11]. PSO is a population-based stochastic technique, inspired by the foraging of birds, for finding optimal values. PSO has the advantages of fast search speed, memory, few parameters, and a simple structure, and it is easy to implement and validate. Its shortcomings, however, include an imbalance between global and local search, low convergence precision, easy entrapment in local optima, and poor robustness.
To obtain better optimization results, a PSO technique that converges at the global minimum is proposed in [12], where a custom algorithm generates the coordinates of the search space. The coordinate values generated by the custom algorithm are passed to the PSO algorithm, which uses them to determine the shortest path between two given end positions. Thus, it is not limited to finding the optimal value; it can also improve the speed of the algorithm. However, the direct transfer of two-point coordinates easily falls into a local optimum, and the obtained value is often only a global suboptimum. The literature in [13] proposed a random disturbance method: an adaptive PSO that introduces a disturbing update mechanism on the global best position, which prevents the algorithm from stalling. Besides, a new adaptive strategy is proposed to fine-tune the three control parameters of the algorithm. However, the need to dynamically adjust and explore these three parameters increases the computational complexity of the optimization algorithm and lowers its execution efficiency. The work in [14] proposed a modified particle swarm optimization (MPSO) with constraints to jointly obtain a smooth path, but the method does not improve particle diversity, which makes the particles stagnate and fall into a local optimum. The study in [15] proposed a fusion of chaotic PSO and the ant colony algorithm (ACO), called the chaotic particle swarm algorithm. The algorithm effectively adjusts the PSO parameters and reduces the number of iterations of the ant colony algorithm, thereby effectively reducing the search time. However, because of the modified parameters, the velocity and position updates are proportional, which can limit the particles' global search ability. Moreover, the resulting solution can still fall into a local optimum.
The hybrid genetic particle swarm optimization algorithm (GA-PSO) proposed in [16] establishes a mathematical model of the path planning problem and a time-first, particle-first iteration mechanism, which makes the evolution process more directional and accelerates the solution of path planning problems. However, the hybrid algorithm has too many parameters that need control. This increases the computational complexity and reduces the execution efficiency of the algorithm, and it remains difficult to improve particle diversity, so the algorithm still easily falls into a local optimum. Therefore, to combine the advantages of the abovementioned improved algorithms and remedy their defects, this paper proposes a new PSO integration scheme based on improved details, which is used to solve the global path planning of mobile robots in an indoor environment. In the proposed algorithm, the navigation point model is selected as the working area model of the mobile robot, and uniform distribution initialization, an exponentially decaying inertia weight, a cubic spline interpolation function, and learning factors with enhanced control are introduced into PSO to improve its performance. Finally, the advancement and effectiveness of the scheme are verified on standard test functions and in obstacle environments. The results show that, compared with the other algorithms on the standard functions, IPSO achieves better optimal results with fewer iteration steps. Compared with the other four path planning algorithms, IPSO reaches the optimal path length in fewer than 20 iteration steps and reduces the path length and simulation time by 2.8% and 1.1 seconds, respectively.

Classical PSO.
The fundamental core of the PSO method is to share information among the individuals of the group so that the movement of the whole group is transformed from disorder to order in the solution space, yielding the optimal solution of the problem. Each candidate solution is a "particle" whose velocity and position are updated through (1) and (2):

V_id^(k+1) = ω V_id^k + c1 r1 (P_id^k − X_id^k) + c2 r2 (G_d^k − X_id^k), (1)

X_id^(k+1) = X_id^k + V_id^(k+1), (2)

The first term of (1) indicates that the next move of the particle is affected by the magnitude and direction of its last flight velocity; the second term means that the next move of the particle originates from its own experience; the third term indicates that the next move of the particle originates from learning from the best companion in the population. That is, the next step of a particle is determined by its own experience and the best experience of its companions. Here, V_id^k is the dth component of the flight velocity vector of particle i at the kth iteration; X_id^k is the dth component of the position vector of particle i at the kth iteration; P_id^k and G_d^k are the corresponding components of the particle's personal best position and the global best position; c1 and c2 are the learning factors, which adjust the maximum learning step size; r1 and r2 are two random numbers in the range 0 to 1, used to increase search randomness; and ω is the inertia weight, which adjusts the search ability over the solution space. Although the classical PSO is simple to implement and has few tuning parameters, when used for path planning it is prone to poor search ability, falls into local optima, and suffers from reduced particle diversity, low convergence precision, and low path planning accuracy. Therefore, this paper systematically improves the classical PSO algorithm by combining several improvement methods.
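The classical update equations (1) and (2) can be sketched in Python as follows (a minimal illustration; the array layout and the velocity clamp `v_max` are assumptions, not part of the paper's formulation):

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1, c2, v_max):
    """One velocity/position update of classical PSO, following (1) and (2).

    X, V  : (n_particles, dim) position and velocity arrays
    pbest : (n_particles, dim) best position found so far by each particle
    gbest : (dim,) best position found so far by the swarm
    """
    n, d = X.shape
    r1 = np.random.rand(n, d)  # randomness of the cognitive (self-experience) term
    r2 = np.random.rand(n, d)  # randomness of the social (companion) term
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # equation (1)
    V = np.clip(V, -v_max, v_max)  # optional velocity clamp (an assumption)
    X = X + V                                                  # equation (2)
    return X, V
```

Each call advances the whole swarm one iteration; the caller then re-evaluates fitness and refreshes `pbest` and `gbest`.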

Uniform Distribution.
We aim to ensure that the PSO algorithm does not lose the randomness of the particles when initializing the population while avoiding excessive concentration of the initial particle positions, which is not conducive to global search and later processing and may lead to search stagnation in the late stage. In this paper, a continuous uniform random distribution is used to initialize the particles so that they are relatively uniformly distributed in the search space, facilitating the later search and preventing particles from falling into a local optimum. The formulas for the uniform position distribution are as follows:

x_i = x_min + rand(0, 1) · (x_max − x_min), (3)

y_i = y_min + rand(0, 1) · (y_max − y_min), (4)

where x_max and y_max are the upper limits of the variables, x_min and y_min are the lower limits, and i = 1, ..., n, with n the number of decision variables.
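A Python sketch of this uniform initialization, in the spirit of (3) and (4) (function name and array shapes are illustrative assumptions):

```python
import numpy as np

def init_particles(n_particles, n_nodes, x_min, x_max, y_min, y_max, rng=None):
    """Initialize path-node coordinates with a continuous uniform distribution:
    value = lower + U(0, 1) * (upper - lower)."""
    rng = np.random.default_rng() if rng is None else rng
    xs = x_min + rng.random((n_particles, n_nodes)) * (x_max - x_min)
    ys = y_min + rng.random((n_particles, n_nodes)) * (y_max - y_min)
    return xs, ys
```

Every coordinate lands inside the search box, and the uniform draw keeps the swarm spread out rather than clustered.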

Exponential Decay Inertia Weight.
The change of the inertia weight ω affects the position of the particle. The larger the value of ω, the stronger the global search ability and the weaker the local mining ability. Therefore, better results are obtained when ω is dynamic (adjustable) rather than fixed. The value of ω can vary linearly during the PSO search process, or dynamically based on a measure function of PSO performance. By analyzing and summarizing the linearly decreasing inertia weight proposed in [17], the improved PSO based on a modified inertia weight proposed in [18], and the fuzzy inertia weight strategy proposed in [19], this paper improves on these methods. The fixed, predictable, iteration-step-based schedules adopted in previous work are global transformations that are not conducive to the particles; for example, a fixed transformation can narrow the search range of the particles and harm their diversity. This paper follows [20] and introduces an exponentially decaying inertia weight, which better matches the characteristics of the exponential function. By adapting the step changes of the early and late search phases to the search area, the velocity can be updated appropriately in different periods. This eliminates premature particles and improves the robustness of the algorithm. Hence, both the global search and the local mining ability are maintained, which solves the abovementioned problems to some extent. The expression (5) decays ω exponentially from ω_max toward ω_min, where ω_min is the minimum weight, ω_max is the maximum weight, MaxIt is the maximum number of iterations, and it is the current iteration number.
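A sketch of one plausible exponential decay schedule. The paper specifies only that ω decays exponentially from ω_max toward ω_min over MaxIt iterations; the decay constant `k` below is an assumption:

```python
import math

def inertia_weight(it, max_it, w_min=0.4, w_max=0.9, k=4.0):
    """Exponentially decaying inertia weight: starts near w_max for global
    search, decays toward w_min for local mining. k is an assumed constant."""
    return w_min + (w_max - w_min) * math.exp(-k * it / max_it)
```

Early iterations keep ω close to ω_max (large steps, broad search); late iterations approach ω_min (small steps, fine local search).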

Improved Learning Factors.
The standard PSO learning factors in (1) represent the acceleration weights with which each particle moves toward its local optimum and the global optimum. Lower values allow the particles to linger outside the target area, while higher values cause the particles to rush suddenly toward, or overshoot, the target area. Several methods have improved the learning factors, although the learning factor and inertia weight weaken the uniformity of the optimization algorithm to a certain extent, which is not conducive to the global and local search. The work in [21] analyzes PSO with a compression factor, treating the learning factor as a function of the inertia weight. This kind of method has the advantage of transforming the three variables (the inertia weight and the two learning factors) into one variable, which not only facilitates practical application but also enhances the uniformity of the optimization process. However, the function used is complicated, costly, and time-consuming. Besides, the probability of particle stagnation in the later search is high, increasing the risk of particles falling into local optima. Therefore, in view of the high demand for self-learning ability in the early search period and for "social learning ability" in the later period, this paper adopts a dynamic method to improve the learning factors; the expressions are shown in (7) and (8) and are constrained by (9). The cosine function in (7) is decreasing on the interval (0, π/2); it has a large value at the beginning of the search, and the value decreases in the later part of the search. The sine function in (8) is increasing on the interval (0, π/2), with a small value early in the search and an increasing value as the search proceeds.
Not only does this satisfy the condition that the PSO algorithm has better learning ability throughout the optimization process, but it also merges the inertia weight and the learning factors into one variable, which is convenient for practical application and strengthens the uniformity of the optimization process. Equation (9) is the guiding condition for selecting and adjusting the PSO parameters given in [22].
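A sketch of one way to realize the cosine-decreasing c1 of (7) and sine-increasing c2 of (8). The bounds `c_min` and `c_max` are assumptions; the paper gives only the functional shapes on (0, π/2):

```python
import math

def learning_factors(it, max_it, c_min=0.5, c_max=2.5):
    """Dynamic learning factors on (0, pi/2):
    c1 decreases with a cosine (strong self-learning early),
    c2 increases with a sine (strong social learning late)."""
    t = (math.pi / 2) * it / max_it
    c1 = c_min + (c_max - c_min) * math.cos(t)  # in the spirit of (7)
    c2 = c_min + (c_max - c_min) * math.sin(t)  # in the spirit of (8)
    return c1, c2
```

Note that c1 + c2 stays within a fixed band over the whole run, which keeps the acceleration weights compatible with a stability constraint such as (9).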

Improvement Based on Robot Dynamics Requirements
(1) Cubic Spline Interpolation. The experimental results (see Section 4.2 for details) show that the path planned by the improved PSO has many turning points and is rough, which affects the dynamic characteristics of the robot while it moves. Therefore, the improved PSO algorithm must be refined further to produce a smooth path and improve its adaptability to the robot dynamics requirements. The cubic spline curve is fitted over several interpolation intervals with cubic polynomials to provide a smooth curve, defined as follows: on the interval [a, b], take n + 1 nodes with coordinates (x_0, y_0), (x_1, y_1), ..., (x_n, y_n). A function is called a cubic spline function if it satisfies the following conditions: (i) on each subinterval [x_i, x_{i+1}], z_i(x) is a cubic polynomial; (ii) the function and its first and second derivatives are continuous at the interpolation points; (iii) z_i(x) commonly uses endpoint conditions that satisfy one of the following three requirements: (a) free boundary: the second derivative at the endpoints is zero; (b) clamped boundary: the first derivative values at the start and end points are specified; (c) not-a-knot boundary: the third derivative is continuous at the second and the second-to-last nodes.
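The spline smoothing step can be sketched with SciPy's `CubicSpline` (parameterizing the nodes by index is an assumption; the paper later specifies the not-a-knot boundary, which is SciPy's default `bc_type`):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(node_x, node_y, n_points=100):
    """Fit cubic splines through the path nodes (not-a-knot boundary,
    matching the paper's 'non-node boundary') and sample n_points
    along the smooth path."""
    t = np.arange(len(node_x))          # parameterize nodes by index
    sx = CubicSpline(t, node_x)         # bc_type='not-a-knot' is the default
    sy = CubicSpline(t, node_y)
    ts = np.linspace(0, len(node_x) - 1, n_points)
    return sx(ts), sy(ts)
```

The sampled curve passes exactly through every node while keeping the first and second derivatives continuous, which is what removes the sharp turning points.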
(2) Particle Coding. Particle coding is the assignment of coordinate positions to several path nodes. A path node is the junction of two adjacent cubic spline intervals. Path nodes can be selected arbitrarily or according to the environment. Suppose that the coordinates of the n path nodes are (x_n1, y_n1), (x_n2, y_n2), ..., (x_nn, y_nn), and the start and end coordinates of the path are (x_s, y_s) and (x_t, y_t). Cubic spline interpolation through these nodes yields the m interpolation points (x_1, y_1), (x_2, y_2), ..., (x_m, y_m). The path planned by a particle's code is the polyline connecting the interpolation points. This paper uses the path nodes, the interpolation points, and the start and end points of the path, connected in order, as the running trajectory of the robot.
(3) Evaluation Function. This paper seeks the shortest path that does not intersect any obstacle. The evaluation function of our proposed algorithm is

F = L + α · P, (11)

where L is the total path length of the mobile agent, often used as an index in path planning, obtained by summing the segment lengths from the ith path point to the (i + 1)th path point:

L = Σ_i sqrt((x_{i+1} − x_i)^2 + (y_{i+1} − y_i)^2), (12)

where x_i and y_i are the coordinates of the ith path point and x_{i+1} and y_{i+1} are the coordinates of the (i + 1)th path point; α is the weight coefficient, set to 100 here, which is used to exclude illegal paths, i.e., paths through obstacles; and P is a penalty function for the obstacle avoidance constraint in the selected indoor environment model, used to maintain a safe distance:

P = Σ_m max(0, R_m − sqrt((x_i − c_m)^2 + (y_i − d_m)^2)), (13)

where R_m is the radius of the mth obstacle, the sum runs over all m obstacles and the path points, and c_m and d_m are the center coordinates of the mth obstacle. The smaller the value of P, the higher the safety coefficient of the final path: if the path passes through the mth obstacle, P is greater than 0; if the obstacle is avoided, P is 0.
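A sketch of the evaluation function in the spirit of (11)–(13), with circular obstacles (the exact aggregation of the penalty over path points is an assumption drawn from the description):

```python
import numpy as np

def fitness(px, py, obstacles, alpha=100.0):
    """F = L + alpha * P for a sampled path (px, py).
    obstacles: list of (cx, cy, R) circles; P > 0 only if the path enters one."""
    # (12): total Euclidean length over consecutive interpolation points
    L = np.sum(np.hypot(np.diff(px), np.diff(py)))
    # (13): penetration depth into each obstacle, 0 when the path avoids it
    P = 0.0
    for cx, cy, R in obstacles:
        d = np.hypot(np.asarray(px) - cx, np.asarray(py) - cy)
        P += np.maximum(R - d, 0.0).sum()
    return L + alpha * P
```

Because α = 100 dwarfs any length saving, a path that clips an obstacle is always scored worse than a slightly longer collision-free one.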

Improved Algorithm Flow
This section provides a detailed step-by-step explanation of the improved PSO algorithm described in Section 2.
Step 1: the number of path nodes, together with the number of interpolation, is determined according to the actual environment.
Step 2: set the particle parameters and uniformly initialize the particle positions using (3) and (4), respectively. Then, the particle population and velocities are initialized.
Step 3: compute the coordinates of m interpolation points in x and y directions of each particle.
Step 4: calculate the fitness value of the particle according to (11).
Step 5: update the velocity and position of the particle according to (1), (2), and (5) to (9), respectively. Update the local optimal value Pbest and the global optimal value Gbest.
Step 6: determine whether the updated particle intersects an obstacle according to (13), and obtain the path composed of the updated path node coordinates. The number of iterations is increased by one.
Step 7: if the termination condition is met (the error falls below a negligible threshold), the algorithm ends and the optimal path is output. Otherwise, go to Step 3 and repeat the procedure. The flowchart of the algorithm is shown in Figure 1.
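Steps 1–7 can be sketched end to end in Python as follows. This is a toy illustration, not the paper's implementation (which uses MATLAB): the environment, population size, velocity bounds, and the exact parameter schedules for ω, c1, and c2 are all assumptions consistent with the descriptions above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
N_POP, MAX_IT, N_NODES, N_INTERP = 30, 60, 3, 50          # Step 1 (assumed sizes)
START, GOAL = np.array([0.0, 0.0]), np.array([9.0, 9.0])
OBSTACLES = [(4.0, 4.0, 1.0)]                             # toy (cx, cy, R) obstacle
LO, HI, ALPHA = 0.0, 9.0, 100.0

def path_points(nodes):
    """Step 3: spline through start + path nodes + goal, sample interp points."""
    pts = np.vstack([START, nodes.reshape(N_NODES, 2), GOAL])
    t = np.arange(len(pts))
    ts = np.linspace(0, len(pts) - 1, N_INTERP)
    return CubicSpline(t, pts[:, 0])(ts), CubicSpline(t, pts[:, 1])(ts)

def fitness(nodes):
    """Step 4: length plus obstacle penalty, per (11)-(13)."""
    px, py = path_points(nodes)
    L = np.sum(np.hypot(np.diff(px), np.diff(py)))
    P = sum(np.maximum(R - np.hypot(px - cx, py - cy), 0.0).sum()
            for cx, cy, R in OBSTACLES)
    return L + ALPHA * P

# Step 2: uniform initialization of positions; zero initial velocities
X = LO + rng.random((N_POP, 2 * N_NODES)) * (HI - LO)
V = np.zeros_like(X)
F = np.array([fitness(x) for x in X])
pbest, pbest_f = X.copy(), F.copy()
g = np.argmin(F); gbest, gbest_f = X[g].copy(), F[g]

for it in range(MAX_IT):                                  # Steps 5-7
    w = 0.4 + 0.5 * np.exp(-4.0 * it / MAX_IT)            # exponential inertia
    c1 = 0.5 + 2.0 * np.cos(np.pi / 2 * it / MAX_IT)      # dynamic learning
    c2 = 0.5 + 2.0 * np.sin(np.pi / 2 * it / MAX_IT)      # factors
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = np.clip(X + V, LO, HI)
    F = np.array([fitness(x) for x in X])
    better = F < pbest_f
    pbest[better], pbest_f[better] = X[better], F[better]
    g = np.argmin(pbest_f)
    if pbest_f[g] < gbest_f:
        gbest, gbest_f = pbest[g].copy(), pbest_f[g]

print(round(gbest_f, 2))  # cost of the best path found
```

The loop mirrors the flowchart: evaluate, update Pbest/Gbest, update velocities and positions with the improved ω, c1, c2 schedules, and stop after MaxIt iterations.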

Simulation Experiment and Result Analysis
To verify the effectiveness and feasibility of the improved algorithm, the optimal values of typical test functions are compared and analyzed, together with path planning experiments, for the improved algorithm of this paper (denoted as IPSO), the classical PSO algorithm (PSO), the stochastic inertia weight PSO algorithm of [23] (RandWPSO), the trigonometric learning factor PSO algorithm of [24] (TFPSO), and the PSO with contraction factor of [25] (NCFPSO). The simulation experiments are carried out on Windows 10 with a Core i7 CPU (2.2 GHz), 8 GB of memory, and MATLAB R2018a.

Standard Test Function Optimization.
At present, many researchers evaluate and compare the performance of intelligent bionic algorithms by finding the optimal values of five typical functions. The Ackley function is generally used to test the global convergence rate of an algorithm, the Rastrigin function is used to test the algorithm's ability to find the global optimum, and the Griewank function tests the algorithm's ability to jump out of local optima. The Sphere and Rosenbrock functions are unimodal functions used to test the solution of global optima.
In this paper, the above five functions are used as objects to compare and evaluate the effectiveness of our proposed algorithm.
The basic mathematical properties of these functions are shown in Table 1.

Standard Test Function and Parameter Setting.
This section compares the performance of our proposed algorithm with algorithms developed in the current literature. The maximum number of iterations of each algorithm is set to MaxIt = 1000, the population size to Npop = 50, and the dimension to D = 30; the lower and upper limits of the dynamic inertia weight are set to w_min = 0.4 and w_max = 0.7, respectively. Each of the five algorithms was run 20 times, and the optimal value, mean value, standard deviation, and average simulation run time were computed from the experimental results of each algorithm.
The performance of our proposed IPSO algorithm was evaluated using these four measures together with the iterative curves.

Comparative Analysis of Algorithm Simulation Results, Standard Function Test, and Result Analysis.
The test curves for the five standard functions are shown in Figure 2. The test curve for the Ackley function is shown in Figure 2(a). The Ackley function is an n-dimensional function with many local optima, so it is difficult to find its global optimum; nevertheless, the test curve in Figure 2(a) shows that our proposed IPSO algorithm has the fastest convergence and highest precision among all the compared algorithms. Figure 2(b) shows the iterative curve of the Griewank function. The Griewank function is a typical nonlinear multimodal function with a large search space and many local optima. As can be seen from Figure 2(b), our proposed IPSO algorithm obtains the optimal value better than the other algorithms in the existing literature, and its convergence speed is fast: its optimal value is reached in fewer than 120 iteration steps. This shows that our proposed algorithm has a better ability to escape local optima. Figure 2(c) shows the iterative curve of the Rastrigin function. The Rastrigin function, similar to the Griewank function, is a multipeak function on which algorithms easily settle into a locally optimal solution.
From Figure 2(c), our proposed IPSO algorithm approaches the global optimum in fewer iterative steps than the other four algorithms in the literature. Our proposed algorithm converges at 110 iteration steps, which verifies its ability to jump out of local optima. Figure 2(d) shows the iterative curve of the Rosenbrock function. The global optimum of the Rosenbrock function lies in a smooth, narrow parabolic valley. The function provides limited information to the optimization algorithm, making it challenging to determine the search direction and thus to find the global optimum. Nevertheless, the iteration curve shows that our proposed IPSO algorithm finds the global optimum in fewer than 60 iterations, the fastest among the four compared algorithms, which indicates that our proposed algorithm has better global search ability. Figure 2(e) shows the iteration curve of the Sphere function. From Figure 2(e), the advantage of our proposed algorithm in convergence speed and precision is not obvious, though the optimal value it obtains is the smallest. This is because the Sphere function is a unimodal function with a unique global minimum, so finding the optimal value is not difficult and the function's power to discriminate between algorithms is limited. Table 2 presents the test result data obtained on the standard functions Ackley, Griewank, Rastrigin, Rosenbrock, and Sphere for the performance comparison of our proposed algorithm. It can be seen from Table 2 that, on the Ackley function, the best result of our algorithm is the same as that of the other four comparison algorithms, whereas on the other four functions the results of our proposed algorithm are improved by at least 45%.
After the optimization of the five functions by the proposed algorithm, the average value is improved by at least 0.1% and the standard deviation by at least 0.2%. The experimental data show that the IPSO algorithm is superior to the other four comparison algorithms in terms of optimal value, average value, and standard deviation. However, its average running time exceeds that of the classical PSO.
This is because our proposed algorithm replaces the constant parameters of the classical PSO with parameter functions, which increases the computation to some extent. The limitation of the IPSO algorithm is therefore its running time; our future studies will consider how to reduce it. Although the IPSO algorithm converges more slowly on the Rastrigin and Sphere functions at the beginning, as the optimization proceeds it performs better than the other four algorithms in the middle and later stages.

Experimental Environment and Parameter Setting.
The IPSO algorithm of this paper uses the navigation point model of [26] to construct the experimental environment model and applies obstacle expansion treatment. The experimental environment is divided into a simple environment and a complex environment. The number of obstacles in the simple environment is set to 5, the number of path nodes to 3, and the number of interpolation points to 100. Similarly, the number of obstacles in the complex environment is set to 11, the number of path nodes to 5, and the number of interpolation points to 100. The spline boundary condition is the not-a-knot boundary.
Simulation parameter selection: the population size and the maximum number of iterations of the five algorithms are kept consistent: Npop = 150, MaxIt = 100, w_min = 0.4, and w_max = 0.9.

Path Planning and Algorithm Performance Analysis
(1) Simple Environment. In the simple environment, the starting point (■) is at the map coordinates (0, 0) and the endpoint (★) is at (9, 9), as shown in Figure 3(a).
It can be seen from Figure 3(a) that the path of the IPSO algorithm is smoother, which improves the dynamics of the robot motion. The path planned by the classical PSO does not find the optimal solution at the beginning of the optimization and stagnates along the way, so the algorithm easily falls into a local optimum; its ability to find the global optimum is weak because the particle diversity is poor and the disturbance is small. The TFPSO algorithm shows brief stagnation at the start of the search and finally obtains only a locally optimal solution despite its disturbance, which indicates that its ability to jump out of local optima is weak. Figure 3(b) shows that our proposed IPSO algorithm has the fastest convergence rate, converging at 13 iteration steps; the RandWPSO algorithm converges at around 20 iteration steps; TFPSO converges at the 48th iteration; NCFPSO converges at the 14th iteration step; and the classical PSO converges at the 34th iteration. Table 3 compares the five algorithms on average path length and average simulation run time. Compared with the other four algorithms, the longest path length of our proposed algorithm is reduced by at least 3.3%, the shortest path length by at least 2.9%, the average path length by at least 5.2%, and the average simulation time by 1.2 s.
(2) Complex Environment. To verify the universality of the experiment, two types of experiments were carried out: the first runs from the starting point to the endpoint, and the second from the endpoint to the starting point. In the first type of experiment, the starting point (■) is at the map coordinates (0, 0) and the endpoint (★) is at (9, 9), as shown in Figure 4(a).

Journal of Control Science and Engineering
It can be seen from Figure 4(a) that the path planned by our proposed algorithm is not only the shortest but also the smoothest. The paths planned in the complex environment are relatively concentrated because the many obstacles leave fewer paths to choose from. The classical PSO and TFPSO stagnate at the beginning of the run, while the advantage of the IPSO algorithm is more prominent; the two stagnating algorithms achieve a good solution only after optimizing for a while, jumping out of the local optimum once sufficient disturbance accumulates. This shows that the two algorithms (classical PSO and TFPSO) have small disturbances, low particle diversity, and a weak ability to jump out of local optima. Figure 4(b) presents the iterative curves obtained in the complex environment: the IPSO algorithm has the fastest convergence rate and the highest convergence precision, converging in 22 iteration steps; NCFPSO takes 27 iteration steps; RandWPSO needs 28 iterations before it starts to converge; TFPSO starts to converge at around 39 iterations; and the PSO algorithm starts to converge at around 93 iteration steps. Table 4 compares the path lengths of the five algorithms in the first type of complex environment. It shows that the shortest path of the IPSO algorithm is reduced by at least 3% compared with the other four algorithms, the average path by at least 2.1%, and the average simulation running time by 1.1 s; furthermore, the path planning has higher accuracy. In the second type of experiment, the starting point (■) is at the map coordinates (9, 9) and the endpoint (★) is at (0, 0), as shown in Figure 5(a).
As Table 5 shows, the path planning performance of our proposed algorithm is as follows: the average path length is reduced by at least 5.2%, the shortest path length by at least 1.5%, the longest path length by 7.2%, and the average simulation running time by 1.2 s.
In summary, it is verified that our proposed IPSO algorithm yields the shortest path, the highest precision, and the least processing time compared with the four algorithms existing in the literature.

Summary of Path Planning.
In the simple environment, the paths planned by the five algorithms are relatively scattered because there are fewer obstacles and ample space, leaving many candidate paths for the algorithms to select. Comparing the path planning diagrams of the first and second types of experiment in the complex environment, the paths of the five algorithms in the first type are more concentrated when the robot starts running and more dispersed in the later stages. This is because the first type of experiment has dense obstacles near the starting point, so the algorithms must avoid them according to their optimization ability, while the obstacles near the endpoint are sparse, giving the later portions of the paths more freedom. The second type of experiment shows the opposite behavior. The three path planning experiments were carried out to verify the effectiveness of our proposed IPSO algorithm against the algorithms in the existing literature, comparing the longest path, the shortest path, the average path, and the average simulation running time. In all the comparisons, our proposed IPSO algorithm performed better than the other four algorithms. This is because the proposed algorithm introduces a uniform distribution initialization strategy; furthermore, the inertia weight and the learning factors interact in the optimization, which reduces the parameters of the proposed algorithm, and the increased uniformity of the improved algorithm better balances the global search. The introduction of local optimization and the exponentially decaying inertia weight improves the search ability of our proposed algorithm and increases its disturbance, which in turn improves the diversity of the particles.

Conclusion
In this paper, particle swarm optimization (PSO) and cubic spline interpolation are combined to solve the minimization of five test functions and the robot path planning problem. In view of the problems of the classical PSO algorithm, such as poor search ability, low convergence accuracy, easy entrapment in local optima, poor robustness, and poor path smoothness, the classical PSO algorithm is improved in the following aspects: (1) A uniform initialization strategy is adopted for the population to improve the later search ability of our IPSO algorithm. This prevents the algorithm from falling into local optima, because randomly initialized particles are not evenly distributed, which is not conducive to the search in the later stages. (2) The introduction of the exponentially decaying inertia weight gives the particles a large step size in the early stage of the search, which benefits the global search, while the small step size in the later stages benefits local development and yields high optimization accuracy. Experimental results show that this method increases the disturbance and diversity of the particles to a certain extent. (3) Sine and cosine functions, controlled by the same independent variable as the inertia weight, turn the inertia weight and the two learning factors into one variable; this reduces the parameters of the improved algorithm and thus its complexity. The interaction of the inertia weight and the learning factors is increased, the unity of the optimization is improved, and the performance of the algorithm is improved to a certain extent. (4) An evaluation function is constructed and a smooth path is planned with cubic spline interpolation, which improves the accuracy and dynamic characteristics of the path.
However, compared with the classical PSO algorithm, the time advantage of the proposed algorithm is weak, which will be a crucial issue for future research; nevertheless, this does not affect the competitiveness of the improved algorithm. Besides, in follow-up studies, virtual obstacles will be used for simulation experiments on multirobot path planning, and the algorithm will be applied in a real environment, with a TurtleBot as the hardware platform and ROS (Robot Operating System) as the software platform, to improve its practicability.

Conflicts of Interest
The authors declare that they do not have any conflicts of interest.