A Hybrid Dynamic Probability Mutation Particle Swarm Optimization for Engineering Structure Design

Particle swarm optimization (PSO) is a widely used metaheuristic algorithm. However, on practical engineering structure optimization problems, it is prone to premature convergence during the search process and easily falls into a local optimum. To strengthen its performance, a dynamic probability mutation particle swarm optimization with chaotic inertia weight (CWDEPSO) is proposed, which incorporates several ideas from the differential evolution (DE) algorithm. The main improvements in this paper concern the parameters and the algorithm mechanism. For the former, a novel inverse tangent chaotic inertia weight and sine learning factors are proposed, and the scaling factor and crossover probability are improved by random distributions, respectively. For the latter, a monitoring mechanism is introduced: by monitoring the convergence of PSO, a developed mutation operator with a more reliable local search capability is adopted, which increases population diversity and helps PSO escape from local optima effectively. To evaluate the effectiveness of CWDEPSO, 24 benchmark functions and two groups of engineering optimization experiments are used for numerical and engineering optimization, respectively. The results indicate that CWDEPSO offers better convergence accuracy and speed than several well-known metaheuristic algorithms.


Introduction
Recently, an increasing number of fields, such as feature selection [1], artificial intelligence [2], and engineering structure design [3], have faced optimization problems driven by the development of science, technology, and industrialization. Among them, engineering structure optimization problems are difficult to solve with a single specific algorithm because of their extensive and complex constraints. Since traditional optimization methods take too much time to solve these problems, making it difficult to find the optimal solution, researchers have turned to new optimization algorithms with broader applicability. In the past two decades, physical and biological phenomena have provided inspiration, and researchers have consequently invented many optimization algorithms, such as the whale optimization algorithm (WOA) [4], biogeography-based optimization (BBO) [5], the sine cosine algorithm (SCA) [6], the moth-flame optimization (MFO) algorithm [7], ant colony optimization (ACO) [8], the krill herd (KH) algorithm [9,10], the artificial bee colony (ABC) [11], the gravitational search algorithm (GSA) [12], monarch butterfly optimization (MBO) [13,14], the earthworm optimization algorithm (EWA) [15], elephant herding optimization (EHO) [16,17], the moth search (MS) algorithm [18], the slime mould algorithm (SMA) [19], Harris hawks optimization (HHO) [20], the differential evolution (DE) algorithm [21], and particle swarm optimization (PSO) [22,23]. These optimization algorithms are also known as metaheuristic algorithms and are now widely used in real life.
PSO is a swarm intelligence algorithm put forward by the American computer intelligence researcher Eberhart and the psychological researcher Kennedy in 1995 [22], derived from the foraging behavior of bird flocks. PSO is not only robust and efficient but also easy to implement. Therefore, PSO is widely used in practical engineering optimization. However, it is susceptible to premature convergence and falls into local optimal solutions when dealing with complicated multimodal optimization or multiconstraint conditions. To overcome these shortcomings, several scholars have proposed various PSO variants. Liang et al. introduced comprehensive learning PSO (CLPSO) [24], which incorporates an original learning method. Liu and Gao proposed chaos PSO (CPSO) [25], which improves the performance of PSO by introducing a chaos mechanism. Wang et al. proposed adaptive granularity learning distributed particle swarm optimization (AGLDPSO) [26] with the help of machine-learning techniques, including clustering analysis based on locality-sensitive hashing (LSH) and adaptive granularity control based on logistic regression (LR). To avoid premature convergence, Jordehi proposed a time-varying acceleration coefficient PSO (TVACPSO) [27]. A local version of particle swarm optimization is adopted by Li et al. to implement a pipeline-based parallel PSO (PPPSO, i.e., P3SO) [28]. Fan and Jen attempted to improve the effectiveness and convergence of basic PSO by adopting the idea of multiple cooperative swarms, and an enhanced partial search PSO (EPS-PSO) [29] was proposed. Ma et al. proposed the chaotic PSO-arctangent acceleration coefficient algorithm (CPSO-AT) [30], which strengthens the original PSO through chaotic initialization and nonlinear optimization. Zhang et al.
proposed a cooperative coevolutionary bare-bones particle swarm optimization (CCBBPSO) with function independent decomposition (FID), called CCBBPSO-FID [31]. Liu et al. proposed a coevolutionary particle swarm optimization with a bottleneck objective learning (BOL) strategy for many-objective optimization (CPSO) [32]. Xia et al. proposed a triple archives PSO (TAPSO) [33]. Although the convergence accuracy of these algorithms is higher compared with the basic PSO, their convergence and robustness are still unsatisfactory.
Similar to PSO, DE is also a typical swarm intelligence algorithm, with the strengths of simple rules, excellent robustness, fast convergence, and easy implementation. Nevertheless, its individual convergence is too strong, which makes DE very easily attracted by local extreme values late in evolution, leading to premature convergence. To overcome such issues, many scholars have worked on adaptive variants of DE, such as self-adaptive DE with a developed mutation method (IMSaDE) [34], multipopulation ensemble DE (MPEDE) [35], self-adapting control parameters in DE (jDE) [36], and self-adaptive DE (SaDE) [37]. In addition, Zhao et al. proposed an LBP-based adaptive DE (LBPADE) algorithm [38]. Wang et al. proposed a new automatic niching technique based on affinity propagation clustering (APC) and designed a novel niching differential evolution algorithm, termed automatic niching DE (ANDE) [39]. Zhan et al. developed a novel double-layered heterogeneous DE algorithm and realized it in a cloud computing distributed environment (CloudDE) [40]. Wang [42]. Zhan et al. proposed an adaptive DDE (ADDE) [43] to relieve the sensitivity of strategies and parameters. Chen et al. developed a distributed individual differential evolution (DIDE) algorithm [44] based on a distributed individuals for multiple peaks (DIMPs) framework and two novel mechanisms. The most classical variant is adaptive DE with an optional external archive (JADE) [45], put forward by Zhang and Sanderson, which uses Cauchy and Gaussian distributions to dynamically update the parameters and improve the population's convergence speed toward the optimal value. Although these modifications improve on traditional DE, their parameter improvement depends too much on previous historical experience, and problems such as slow convergence remain.
To overcome the limitations of a single algorithm, scholars have combined PSO and DE and conducted much research on such hybrid algorithms, for example, self-adaptive mutation DE based on PSO (DEPSO) [46], ensemble PSO and DE (EPSODE) [47], combining DE and PSO (DEPSO) [48], and hybridizing PSO with DE (PSO-DE) [49]. The existing hybrid algorithms can be roughly classified as follows: the two algorithms are used in sequence; the two algorithms solve two different populations, respectively, and communicate with each other; or the two algorithms jointly evaluate and select optimal values. These hybrid algorithms inherit the advantages of each algorithm, which promotes the optimization performance of basic PSO and DE. However, the demand for more computing resources and slower convergence speed remain the main problems. Based on the above, we combine some ideas of DE and propose a dynamic probability mutation particle swarm optimization with chaotic inertia weight (CWDEPSO), which improves the parameters of PSO and DE and adds chaotic processing to the inertia weight. Once PSO is found to fail to update the optimal value, a different mutation operator is adopted to help PSO escape from local extrema, thereby avoiding premature convergence and enhancing the distribution entropy of the population, with the aim of balancing the local and global search capabilities. The distribution entropy of the population represents the degree of disorder of the population and the population diversity [50]. Generally, individuals are scattered in the early stage of iteration, so the degree of disorder is high and the population diversity is high. At the end of the iteration, individuals concentrate in areas with strong fitness, so the degree of disorder is low and the population diversity is low. The population diversity can be improved by adding a mutation operator [51].
The contributions of this paper are as follows: (1) The paper proposes a different mutation operator, DE/current-to-gbest/1, and a novel dynamic probability mutation strategy, and the former is applied within the latter.

Mobile Information Systems
(2) In the selection stage, to fully exploit the individual value in the population, an original selection strategy is proposed, which introduces a candidate population to reuse the culled individuals. (3) The paper introduces methods using logistic chaotic maps, inverse tangent and sine trigonometric functions, and random distributions to optimize the parameters, respectively. (4) CWDEPSO is applied to optimize actual engineering structures: the tension spring structure and the pressure vessel structure.
24 benchmark test functions and two groups of engineering optimization experiments are adopted to evaluate the algorithm's performance. The results suggest that CWDEPSO performs better than PSO [22], DE [21], AIWCPSO [52], MPSPSO-ST [53], JADE [45], SinDE [54], and DEPSO [55], as well as five common metaheuristic algorithms, GSA [12], ABC [11], MFO [7], SCA [6], and BBO [5]. Besides, it achieves the best performance on practical engineering optimization problems. The remainder of the article is arranged as follows. The second section introduces basic DE and PSO. The third section introduces the parameter improvements, and the fourth section presents the monitoring mechanism and specific procedures of CWDEPSO. The experiments are presented in the fifth section. The sixth section uses CWDEPSO to solve engineering structure optimization problems. The final section concludes.

PSO and DE Algorithm
The two algorithms, DE and PSO, are introduced in this section.

Particle Swarm Optimizer (PSO).
As a metaheuristic algorithm, the position of each particle represents a candidate solution within PSO, and the quality of a solution is measured by the fitness function. The search proceeds by iteration. During the iterative process, each particle has a memory, recording the best position X_pbest it has visited during the search and combining it with the best position X_gbest found by the entire population to refine its own position X_i^G and velocity V_i^G, i = 1, 2, ..., NP. The NP particles search for the optimal solution in D-dimensional space by iteration. For the i-th particle of the G-th generation, the best position it has visited is pbest_{i,d}^G, and the best position explored by the whole swarm is gbest_d^G. At the (G + 1)-th iteration, the D-dimensional velocity and position vectors of the i-th particle are updated as follows:

V_i^{G+1} = ω·V_i^G + c_1·r_1·(pbest_i^G − X_i^G) + c_2·r_2·(gbest^G − X_i^G), (1)
X_i^{G+1} = X_i^G + V_i^{G+1}, (2)

where ω is the inertia weight; r_1 and r_2 are random values in the interval [0, 1]; and c_1 and c_2 are the learning factors, with c_1 = c_2 = 2. Shi and Eberhart [56] proposed a well-known PSO variant, which introduces a linearly decreasing inertia weight to balance the global and local searches, as shown in Figure 1. The update mechanism is defined as

ω = ω_max − (ω_max − ω_min)·G/G_max, (3)

where G_max and G are the maximum number of iterations and the current iteration number, respectively.
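The velocity and position updates of equations (1) and (2) can be sketched as follows; the helper name `pso_step` and the vectorized NumPy form are illustrative, not the paper's own code.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    # One PSO iteration per equations (1)-(2): X, V, pbest are (NP, D),
    # gbest is (D,); r1 and r2 are fresh uniform randoms per dimension.
    rng = rng or np.random.default_rng()
    NP, D = X.shape
    r1, r2 = rng.random((NP, D)), rng.random((NP, D))
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    return X + V_new, V_new
```

If every particle already sits on its personal and global best with zero velocity, the update leaves the swarm unchanged, which is a quick sanity check.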

Differential Evolution (DE) Optimizer.
As a population-based random search method, DE comprises three steps, mutation, crossover, and selection, to solve optimization problems. DE is adopted worldwide in various fields due to its simple principle, strong robustness, and few control parameters. This part mainly introduces the specific steps of DE.

Population Initialization.
Suppose there is a population containing NP individuals, each represented by a vector X_i = (x_{i1}, x_{i2}, ..., x_{iD}), where D is the dimension of the solution. In the process of evolution, the current and maximum numbers of generations are represented by G and G_max, respectively, and the i-th individual of the G-th generation is represented by the vector X_i^G. The lower and upper limits of each dimension are represented by the vectors X_l = (x_{l1}, x_{l2}, ..., x_{lD}) and X_u = (x_{u1}, x_{u2}, ..., x_{uD}). The initial population is generated as follows:

x_{ij}^0 = x_{lj} + rand(0, 1)·(x_{uj} − x_{lj}), (4)

where rand(0, 1) is a random number in the interval [0, 1].
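The uniform box initialization of equation (4) is a one-liner in vectorized form; the function name and NumPy usage are illustrative.

```python
import numpy as np

def init_population(NP, D, x_l, x_u, rng=None):
    # Equation (4): sample each component uniformly inside [x_l, x_u].
    rng = rng or np.random.default_rng()
    return x_l + rng.random((NP, D)) * (x_u - x_l)
```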

Mutation.
In the G-th generation, each individual (the target vector X_i^G) uses the mutation operator to produce the mutation vector V_i^G. Figure 2 shows the classic mutation operator DE/rand/1. The choice of mutation operator greatly influences the algorithm's performance. Common mutation operators are as follows:

DE/rand/1: V_i^G = X_{r1}^G + F·(X_{r2}^G − X_{r3}^G), (5)
DE/rand/2: V_i^G = X_{r1}^G + F·(X_{r2}^G − X_{r3}^G) + F·(X_{r4}^G − X_{r5}^G), (6)
DE/current/1: V_i^G = X_i^G + F·(X_{r1}^G − X_{r2}^G), (7)
DE/best/1: V_i^G = X_{best}^G + F·(X_{r1}^G − X_{r2}^G), (8)
DE/current-to-best/1: V_i^G = X_i^G + F·(X_{best}^G − X_i^G) + F·(X_{r1}^G − X_{r2}^G), (9)

where r1, r2, r3, r4, and r5 are integers obtained randomly in the interval [1, NP] and r1 ≠ r2 ≠ r3 ≠ r4 ≠ r5 ≠ i. The parameter F is the scaling factor, used to control the scale of variation, usually F ∈ (0, 1]. X_{best}^G is the fittest individual in the current population.
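The five classic operators of equations (5)–(9) can be collected in one dispatch function; indices are 0-based here, and the function name is illustrative.

```python
import numpy as np

def mutate(X, i, best, F, strategy, rng=None):
    # Classic DE mutation operators (equations (5)-(9)).
    # X: (NP, D) population; i: current index; best: index of the fittest.
    rng = rng or np.random.default_rng()
    NP = X.shape[0]
    # five mutually distinct indices, all different from i
    r1, r2, r3, r4, r5 = rng.choice([k for k in range(NP) if k != i], 5, replace=False)
    if strategy == "rand/1":
        return X[r1] + F * (X[r2] - X[r3])
    if strategy == "rand/2":
        return X[r1] + F * (X[r2] - X[r3]) + F * (X[r4] - X[r5])
    if strategy == "current/1":
        return X[i] + F * (X[r1] - X[r2])
    if strategy == "best/1":
        return X[best] + F * (X[r1] - X[r2])
    if strategy == "current-to-best/1":
        return X[i] + F * (X[best] - X[i]) + F * (X[r1] - X[r2])
    raise ValueError(f"unknown strategy: {strategy}")
```

Setting F = 0 collapses each operator onto its base vector (X_{r1}, X_i, or X_{best}), which makes the structural difference between the strategies easy to verify.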

Crossover.
After mutation, the target vector of each generation crosses with its corresponding mutation vector to generate a trial vector U_i^G. Each dimension of the trial vector is taken from either the mutation vector V_i^G or the target vector X_i^G. The crossover operation is

u_{ij}^G = v_{ij}^G, if rand(0, 1) ≤ CR or j = j_rand; otherwise u_{ij}^G = x_{ij}^G, (10)

where i = 1, 2, ..., NP, j = 1, 2, ..., D, and the crossover probability CR ∈ [0, 1] controls the probability that the trial vector inherits from the mutation vector. The index j_rand ∈ [1, D] ensures that at least one dimension of the trial vector inherits its value from the mutation vector.
To prevent values from exceeding the upper and lower bounds, the trial vector must also be bounded: any component that leaves the interval [x_{lj}, x_{uj}] is mapped back into it (equation (11)), where i = 1, 2, ..., NP, j = 1, 2, ..., D.
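The binomial crossover of equation (10) together with a boundary repair can be sketched as below. Clipping out-of-range components is one common choice for equation (11) and is an assumption of this sketch; the paper's exact repair rule may differ.

```python
import numpy as np

def crossover(x, v, CR, x_l, x_u, rng=None):
    # Equation (10): binomial crossover between target x and mutant v;
    # j_rand forces at least one component to come from the mutant.
    rng = rng or np.random.default_rng()
    D = x.shape[0]
    mask = rng.random(D) <= CR
    mask[rng.integers(D)] = True
    u = np.where(mask, v, x)
    # Boundary handling (equation (11), clipping assumed).
    return np.clip(u, x_l, x_u)
```

With CR = 0 exactly one component (the j_rand one) comes from the mutant; with CR = 1 the whole trial vector does.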

Selection.
The greedy strategy is used between the target vector and the trial vector: the one with the better function value enters the next iteration. Better individuals can thus be inherited by the following generation, ensuring the convergence of the algorithm. The selection operation is

X_i^{G+1} = U_i^G, if f(U_i^G) ≤ f(X_i^G); otherwise X_i^{G+1} = X_i^G, (12)

where i = 1, 2, ..., NP and f(x) is the function value of the vector x.

Figure 1: Schematic diagram of PSO.
Figure 2: Schematic diagram of DE/rand/1.

The Parameters of CWDEPSO
This section describes the improvements of the CWDEPSO parameters. We introduce chaos theory, the improvement of the PSO parameters, and that of the DE parameters, in turn.

Chaos Theory.
Chaotic motion is a common nonlinear phenomenon in nature. It looks cluttered on the outside but has subtle regularity on the inside, possessing the features of regularity, randomness, and ergodicity. Chaotic ergodicity can visit every state within a definite interval without repetition. Therefore, an optimized search using chaotic variables is superior to a blind random search. There are many types of chaotic dynamic models. In this paper, the logistic map [57,58] is used to design the chaotic operations. Its iterative equation is

y_{k+1} = μ·y_k·(1 − y_k), (13)

where y_0 = rand(0, 1) and y_0 ∉ {0.25, 0.5, 0.75}. The map is completely chaotic when the control parameter satisfies 3.57 < μ ≤ 4. The best results are obtained when μ = 4, as shown in Figure 3, so we let μ = 4 in this article.
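Equation (13) iterates a single scalar; the short sketch below shows the map and the fact that its orbit stays in [0, 1] for μ = 4.

```python
def logistic_map(y, mu=4.0):
    # Equation (13): one step of the logistic map.
    return mu * y * (1.0 - y)

def chaotic_sequence(y0, n, mu=4.0):
    # Iterate the map n times, collecting the orbit.
    ys = [y0]
    for _ in range(n):
        ys.append(logistic_map(ys[-1], mu))
    return ys
```

Starting from y_0 = 0.3 (avoiding the excluded fixed points 0.25, 0.5, 0.75), the orbit wanders over (0, 1) without settling.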

Improvement of PSO Parameters.
The traditional PSO algorithm uses fixed parameter settings during the evolution process. However, much of the literature indicates that PSO is very sensitive to parameter settings [30,58-61]. Among these works, Zhan et al. [61] proposed APSO, a well-known PSO variant that handles the parameters well. Also, Peng et al. [60] used probabilistic methods to show that the effect of the inertia weight on PSO's convergence is an order of magnitude higher than that of the two learning factors, so the inertia weight most needs adjusting during the evolution of PSO. For the inertia weight ω, Shi and Eberhart [56] proposed a linearly decreasing ω. The rationale is to increase the global search capability in the early iterations and the local search capability in the late iterations: a larger inertia weight helps global exploration, while a small inertia weight is conducive to local exploitation. Later, numerous studies [53,58,59] found that a nonlinear ω is more helpful for enhancing the algorithm. In this paper, we use a nonlinear ω and apply chaotic processing to it, aiming to increase the disorder during iteration and thereby the population diversity of the algorithm in the late iterations. In general, we want the global exploration capability of the algorithm to be strong in the early period and the local search capability to improve later so that the optimal value can be located more accurately. We use the arctangent acceleration coefficient (AT) to construct a nonlinearly changing inertia weight and introduce a chaotic mechanism into it to avoid local optima by increasing the population diversity, thereby improving the search performance of PSO. The update of the inertia weight is presented in equation (14), where φ, τ, and q are constants (φ = 0.9, τ = 0.65, q = 0.05).
The term q·α·y_k is the chaos term, which adds chaotic characteristics to the algorithm while maintaining the original trend, and α is the direction factor. The direction factor makes ω fluctuate up and down; in each iteration, α is set to either 1 or −1. In each generation, y_k is updated by equation (13) and substituted into equation (14). The resulting curves are shown in Figures 4(a) and 4(b).
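A sketch of equation (14) follows. The exact arctangent expression is not reproduced in the extracted text, so the decreasing arctan trend below is an assumption; the chaos term q·α·y_k and the constants φ, τ, q are taken from the paper.

```python
import math
import random

def chaotic_inertia(G, G_max, y_k, phi=0.9, tau=0.65, q=0.05):
    # Assumed nonlinear trend: phi scaled down by a normalized arctan curve.
    # The real equation (14) may differ; only the chaos term structure
    # q * alpha * y_k and the constants are from the paper.
    alpha = random.choice((1, -1))  # direction factor: +1 or -1 each iteration
    trend = phi * (1.0 - (2.0 / math.pi) * math.atan(tau * G / G_max))
    return trend + q * alpha * y_k
```

With the chaos term switched off (q = 0), the weight decreases monotonically from φ, matching the intended global-to-local transition.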
Similarly, the learning factors also greatly influence the performance of PSO. c_1 and c_2 are called the cognitive and social components, respectively. In the traditional PSO algorithm, the cognitive component always equals the social component, that is, c_1 = c_2. However, much of the literature [30,53,59] has demonstrated that the focus should be on the global exploration ability at the beginning of the iteration, i.e., the cognitive component c_1 should be larger, and on the local exploitation ability in the later part of the iteration, i.e., the social component c_2 should be larger. To better balance the global and local exploration abilities during the iteration, we use the sine function so that c_1 gradually decreases and c_2 gradually increases, as given in equations (15) and (16), where z, δ_1, and δ_2 are constants (z = 2, δ_1 = 2.5, and δ_2 = 0.5). The curves of c_1 and c_2 are presented in Figure 5.
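A sketch of equations (15) and (16) follows. The exact sine curves are not reproduced in the extracted text, so the schedule below, c_1 falling from δ_1 to δ_2 while c_2 rises symmetrically, is an assumption consistent with the stated constants and endpoints.

```python
import math

def learning_factors(G, G_max, z=2.0, d1=2.5, d2=0.5):
    # Hypothetical sine schedule: s sweeps 0 -> 1 over the run when z = 2,
    # so c1 decays d1 -> d2 and c2 grows d2 -> d1.
    s = math.sin((math.pi / z) * (G / G_max))
    c1 = d1 - (d1 - d2) * s
    c2 = d2 + (d1 - d2) * s
    return c1, c2
```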

Improvement of DE Parameters.
Similar to PSO, the parameters of DE also have a significant impact on the algorithm [21,24,40,41]. JADE first proposed using Cauchy and Gaussian distributions to update the parameters and has been widely cited [36,62,63]; in [62], the parameters μF and μCR are updated by a power function. Inspired by this, this paper adjusts the scaling factor and the crossover probability with the parameter control methods below. As in JADE, in each generation, the scaling factor of each individual in the population is independently and randomly generated from the Cauchy distribution:

F_i = Cauchy(μF, 0.1), (17)

where Cauchy(μF, 0.1) is a random number generated by the Cauchy distribution with scale parameter 0.1 and location parameter μF. If F_i > 1, it is set to 1, and if F_i ≤ 0, it is regenerated. In the current generation, all successful scaling factors F_i are stored in the set S_F. The location parameter μF is initialized to 0.5. At the end of each generation, μF is updated as follows:

μF = (1 − c)·μF + c·mean_L(S_F), (18)

where c is a constant in the interval [0, 1] and mean_L represents the Lehmer mean:

mean_L(S_F) = Σ_{F∈S_F} F² / Σ_{F∈S_F} F. (19)

In each evolutionary generation, the crossover probability CR of each individual in the population is generated independently from the Gaussian distribution:

CR_i = Gaussian(μCR, 0.1), (20)

where Gaussian(μCR, 0.1) is a random number generated by the Gaussian distribution with a standard deviation of 0.1 and a mean value of μCR. If CR_i > 1, it is set to 1, and if CR_i < 0, it is set to 0. In the current generation, all successful crossover probabilities CR_i are stored in the set S_CR. The parameter μCR is initialized to 0.5. Unlike JADE, at the end of each generation, the update of μCR adopts the power mean:

μCR = (1 − c)·μCR + c·mean_pow(S_CR), (21)

where mean_pow represents the power mean:

mean_pow(S_CR) = ( Σ_{CR∈S_CR} CR^n / |S_CR| )^{1/n}, (22)

where |S_CR| represents the cardinality of the set S_CR and n is a constant (n = 2).
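The sampling and mean-update machinery of equations (17)–(22) can be sketched as below. The blend constant c = 0.1 is an assumed default (the paper only constrains c ∈ [0, 1]); the function names are illustrative.

```python
import numpy as np

def sample_F(muF, rng):
    # Equation (17): Cauchy(muF, 0.1), truncated at 1 and regenerated
    # whenever non-positive.
    while True:
        F = muF + 0.1 * rng.standard_cauchy()
        if F > 0:
            return min(F, 1.0)

def sample_CR(muCR, rng):
    # Equation (20): Gaussian(muCR, 0.1), clipped into [0, 1].
    return float(np.clip(rng.normal(muCR, 0.1), 0.0, 1.0))

def lehmer_mean(S):
    # Equation (19): sum of squares over sum.
    return sum(f * f for f in S) / sum(S)

def power_mean(S, n=2):
    # Equation (22): n-th power mean over the set.
    return (sum(cr ** n for cr in S) / len(S)) ** (1.0 / n)

def update_mu(muF, muCR, S_F, S_CR, c=0.1):
    # Equations (18) and (21): blend the old locations with the means of
    # this generation's successful parameters (skip empty success sets).
    if S_F:
        muF = (1 - c) * muF + c * lehmer_mean(S_F)
    if S_CR:
        muCR = (1 - c) * muCR + c * power_mean(S_CR)
    return muF, muCR
```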

Dynamic Probabilistic Mutation PSO
Under the premise of ensuring population diversity, PSO can effectively explore the solution space and converges quickly. However, the search can easily stall for lack of diversity during the search procedure, and once trapped in premature convergence, it is difficult to escape from the local optimum [64-67]. To solve this problem, we enhance the population diversity in the late iterations in order to avoid falling into local optima. According to [68], DE effectively helps PSO jump out of the local optimum and improves the population diversity when PSO falls into a local optimum in the late iterations, and the literature [48,68,69] shows that combining the DE algorithm with the PSO algorithm can effectively improve performance and avoid premature convergence. However, some algorithms such as DEPSO [48] substantially increase the computational complexity of each iteration by comparing two different trial vectors when adapting. Therefore, in this paper, we improve diversity while avoiding excessive added complexity by observing the results of the previous iterations to decide whether to invoke the DE step next time. This paper proposes a novel dynamic probabilistic mutation PSO to improve the local search in the later iterations. As shown in Figure 6, the improved mutation operator is introduced into PSO to increase the population diversity and help the algorithm escape local convergence. The specific improvements are described below.

Monitoring the Update of the Global Optimum gbest^G.
First, the global optimum gbest^G of each generation is monitored. Once the i-th particle of the G-th generation fails to update the global optimum, the flag bit c_i corresponding to that particle is increased by one. When the flag bits c_i of all particles are greater than one, that is, all particles of the G-th generation failed to improve the optimal value, the swarm is considered trapped in a local optimum, which may harm population convergence. We then increase the probability of mutation according to the change in the flag bits c_i to help the population jump out of local convergence.
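The flag-bit bookkeeping can be sketched as follows. Resetting a particle's flag when it does improve gbest is an assumption of this sketch; the paper only specifies the increment on failure and the all-flags-greater-than-one trigger.

```python
def update_flags(flags, improved_gbest):
    # flags[i] counts consecutive failures of particle i to improve gbest;
    # the reset-on-success behavior is assumed, not quoted from the paper.
    for i, ok in enumerate(improved_gbest):
        flags[i] = 0 if ok else flags[i] + 1
    stalled = all(c > 1 for c in flags)  # every particle failed repeatedly
    return flags, stalled
```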

Improvement of the Mutation Operator.
In addition, an appropriate mutation operator needs to be selected. The DE/current-to-best/1 mutation operator has strong local exploitation capability and fast convergence and reflects the current particle state, but its global exploration ability is comparatively poor. The DE/rand/1 and DE/rand/2 mutation operators depend on other random individuals, which helps improve the population diversity; their global convergence capability is strong, but their convergence speed is relatively slow. Even if PSO fails to update the global optimum gbest^G in the current population, the corresponding current particles still have high exploitation value. Considering that the algorithm is usually near the end of evolution when it falls into local convergence and should pay more attention to local exploitation, we take the DE/current-to-best/1 mutation operator as the basis and improve it.
The new mutation operator DE/current-to-gbest/1 is given in equation (23), where K is a variable independent of the scaling factor and α is a variable that changes with the number of iterations. Making α gradually increase with the number of iterations reduces the influence of the local convergence of PSO in the later stage and improves the ability to explore the surrounding area; α is defined in equation (24). In contrast, the population diversity decreases as the number of iterations increases, so the variable K affecting the population diversity should also increase with the iterations, otherwise the algorithm will still fall into local convergence; K is defined in equation (25). By strengthening global exploration in the early period and focusing on the exploration of the current local area in the later stage, the balance between local and global convergence is effectively maintained.
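Since equations (23)–(25) are not reproduced in the extracted text, the sketch below is only a plausible reading of DE/current-to-gbest/1: α weights the gbest-guided step and K scales the random difference that supplies diversity. Both placements are assumptions.

```python
def current_to_gbest_1(x_i, gbest, x_r1, x_r2, alpha, K):
    # Hypothetical form of equation (23): the roles of alpha and K here
    # are assumed; the paper's missing equation may arrange them differently.
    return [xi + alpha * (gb - xi) + K * (a - b)
            for xi, gb, a, b in zip(x_i, gbest, x_r1, x_r2)]
```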

Processing of Monitoring Data.
When all particle flags satisfy c_i > 1, the counter variable count increases by one. As count grows, the number of convergence failures gradually increases, which means the algorithm is more likely to be in a state of local convergence. The initial increases in count may be coincidental, but the more count increases, the more likely premature convergence becomes. Therefore, we define a probability p, which represents the probability of the algorithm invoking the mutation idea; the larger the count, the larger the probability p. Inspired by the exponential probability update in the literature [49], this paper uses an exponential formula to control the probability p of introducing the mutation idea, as given in equation (26), where G_max is the maximum number of evolutions. If the flag bit c_i corresponding to the i-th particle exceeds the warning value θ, the individual has a high probability of being stuck in a local convergence that is hard to escape, so the reference value of the particle is low. This paper sets the warning value θ = G_max/5 and gives such a particle a proper probability of being reinitialized, as specified in equation (27). The complete monitoring mechanism is shown in Figure 7.

Modification of Selection Method.
After the mutation operation, the algorithm proceeds with the crossover and selection operations of DE. For the selection operation, basic DE and many DE variants adopt the greedy selection method, but greedy selection often abandons many good individuals, which contributes to local convergence [63]. To fully mine the effective information of these excellent individuals, we add a temporary population that briefly retains the abandoned individuals, as given in equation (28), where temp^G holds all the individuals eliminated in the current generation. Some of the outstanding individuals in temp^G are better than the worst individuals of the newly updated population. If the worst individual in the new population is worse than the best individual in the temporary population, it is replaced. It is worth noting that replacing too many individuals would cause local convergence.
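The archive-assisted selection described above can be sketched as follows. Replacing only a single individual (the new population's worst) is this sketch's reading of the text, which warns against replacing too many; the function name is illustrative.

```python
def select_with_archive(pop, trials, f):
    # Greedy selection (equation (12)) with the losers archived in temp
    # (equation (28)); the archive's best may displace the new population's
    # worst, and only one replacement is made to preserve convergence.
    new_pop, temp = [], []
    for x, u in zip(pop, trials):
        winner, loser = (u, x) if f(u) <= f(x) else (x, u)
        new_pop.append(winner)
        temp.append(loser)
    best_t = min(temp, key=f)
    worst_i = max(range(len(new_pop)), key=lambda i: f(new_pop[i]))
    if f(best_t) < f(new_pop[worst_i]):
        new_pop[worst_i] = best_t
    return new_pop
```

On a toy scalar problem with f = |x|, a discarded loser from one pair can rescue a much worse winner from another pair.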
Based on the preceding analysis, the pseudocode of CWDEPSO is given below (Algorithm 1).

Experimental Results and Discussion
In this section, the performance of CWDEPSO is evaluated on 24 benchmark functions that have been widely adopted in the field of numerical optimization [58,70,71], most of which belong to CEC2017 [72,73], as shown in Tables 1-3.

Test Function.
The benchmark functions adopted in this article are divided into three categories: f_1 ∼ f_5 are unimodal functions, where f_5 contains noise interference, and f_6 ∼ f_22 are multimodal functions. Among them, f_7 has many large local optima, and its suboptimal value differs significantly from the global optimum; f_8 has a very narrow global basin, so it is rather easy to fall into a local optimum. f_23 ∼ f_25 are multimodal benchmark functions with fixed dimensions. The three types are listed in Tables 1-3, where Dim and Scope represent the dimension and range of the solution space, respectively, and Type represents the type of the current function.

Figure 6: Schematic diagram of CWDEPSO.

CWDEPSO is compared with PSO, DE, and their variants AIWCPSO, MPSPSO-ST, JADE, SinDE, and DEPSO in this part. To ensure a fair comparison, the parameters in both sets of experiments are all set according to the original papers, strictly following the steps in the literature [58,59], and the population size is set to 500. It is worth noting that CWDEPSO performs the same number of function evaluations per iteration as PSO and DE, which ensures fairness for the same number of iterations. Also, the initial populations of all algorithms are generated with random numbers, and each group is run twenty times independently.
The specific parameter settings of each algorithm follow the original papers. The results are listed in Table 4, with the optimal values identified in bold.

Algorithm 1: Pseudocode of CWDEPSO.
(1) Step 1: initialization
(2) Parameter initialization: NP, D, G_max, G = 1, μF = μCR = 0.5, c = 0
(3) Generate the initial population pop_0
(4) Find the initial pbest and gbest of the population
(5) Step 2: update parameters
(6) While iter < iter_max
(7) Update the inertia weight ω using equation (14)
(8) Update the social component c_2 and the cognitive component c_1 using equations (15) and (16)
(9) Calculate F_i and CR_i of each particle using equations (17) and (20)
(10) Update the probability P using equation (26)
(11) Step 3: finding the optimal value through iteration
(12) For i = 1 : NP do
(13) Update the velocity V_i using equation (1)
(14) Update the position of the particle X_i using equation (2)
(15) End For
(16) If rand(0, 1) < P
(17) For i = 1 : NP do
(18) Mutation operation using equation (23)
(19) Crossover and boundary processing using equations (10) and (11), respectively
(20) Use equations (12) and (28) to obtain X_i and temp_i
(21) Update the parameters μF_i and μCR_i using equations (18) and (21), respectively
(22) End For
(23) Select the better one between the worst of temp_i and the best of X_i
(24) Update pbest and gbest
(25) Else
(26) Calculate the fitness of the particles in the population and update pbest and gbest
(27) Choose whether to reinitialize the particle using equation (27)

Function
Scope Dim Name Sum of different powers purpose of facilitating the comparison of the mean and the S.D., we represent the data with a radar chart, as shown in Figure 8. Figure 9 represents the trajectory of the average of the algorithm on the current test function.
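As an illustration only, the loop above can be sketched in Python. The paper's equations are not reproduced in this excerpt, so standard stand-ins are used and labeled in comments: a linearly decaying inertia weight in place of the chaotic weight of equation (14), time-varying acceleration coefficients in place of the sine learning factors of equations (15) and (16), fixed F and CR in place of the adaptive rules of equations (17)-(21), DE/rand/1 mutation with binomial crossover in place of equations (23) and (10)-(11), and a linearly increasing mutation probability in place of equation (26):

```python
import random

def sphere(x):
    # assumed unimodal test objective (f1 in the paper's notation)
    return sum(v * v for v in x)

def cwdepso_sketch(f, dim=10, swarm=30, iters=200, lo=-5.0, hi=5.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]
    pfit = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters        # stand-in for the chaotic weight (eq. 14)
        c1 = 2.5 - 2.0 * t / iters       # stand-ins for the sine factors (eqs. 15-16)
        c2 = 0.5 + 2.0 * t / iters
        P = t / iters                    # stand-in for the dynamic probability (eq. 26)
        for i in range(swarm):           # PSO velocity/position update (eqs. 1-2)
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
        if random.random() < P:          # DE phase: mutation, crossover, selection
            F, CR = 0.5, 0.9             # fixed here; the paper adapts them (eqs. 17-21)
            for i in range(swarm):
                a, b, c = random.sample([j for j in range(swarm) if j != i], 3)
                jr = random.randrange(dim)
                trial = [X[a][d] + F * (X[b][d] - X[c][d])
                         if (random.random() < CR or d == jr) else X[i][d]
                         for d in range(dim)]
                trial = [min(hi, max(lo, v)) for v in trial]
                if f(trial) < f(X[i]):   # greedy selection
                    X[i] = trial
        for i in range(swarm):           # update personal and global bests
            fit = f(X[i])
            if fit < pfit[i]:
                pbest[i], pfit[i] = X[i][:], fit
                if fit < gfit:
                    gbest, gfit = X[i][:], fit
    return gbest, gfit
```

The greedy one-to-one DE selection used here is a simplification of step (23) above, which instead compares the worst of tempi with the best of Xi.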
As shown in Table 4, for the unimodal functions f1 ∼ f5, CWDEPSO is significantly more accurate than the other seven algorithms, achieving the best results on every index. Among the multimodal functions, the best results on several functions are found by DEPSO and JADE, respectively, but for f18, CWDEPSO performs best in terms of the best value and successfully reaches the theoretical optimum of the function. For the fixed-dimensional multimodal benchmark functions f22, f23, and f24, CWDEPSO still performs well: it is slightly worse than JADE on f23 and achieves the best performance on f24.

In accordance with [70], to make the comparison results statistically significant and reliable, Wilcoxon's rank-sum test is used to test for significant differences between the algorithms at the 5% significance level; a significant difference is recorded as 1, and otherwise as 0. Table 5 shows the results of the Wilcoxon rank-sum test for each algorithm on each function.

Figure 9 shows the convergence curves of the eight algorithms on twelve test functions (f2, f3, f4, f5, f6, f7, f8, f9, f10, f13, f17, f22). From Figure 9(a), it can be seen that CWDEPSO has the best performance on this function. In Figure 9(b), CWDEPSO has the highest convergence accuracy and speed. As shown in Figure 9(c), CWDEPSO again performs best in terms of convergence accuracy and speed. In Figure 9(d), CWDEPSO effectively overcomes the noise interference of this function, and its convergence accuracy and speed are optimal. Among the multimodal functions, DEPSO shows the best performance in Figures 9(e) and 9(h). In Figure 9(f), CWDEPSO has higher convergence accuracy than the remaining seven algorithms. In Figure 9(g), CWDEPSO has excellent global and local search capability and can escape local extrema effectively.
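The rank-sum decision described above (significant difference → 1, otherwise 0) can be sketched with the usual normal approximation; this is a generic illustration of the test, not the exact implementation used in the paper:

```python
from math import sqrt
from statistics import NormalDist

def rank_sum_flag(a, b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns 1 if samples a and b differ significantly at level alpha, else 0."""
    pooled = sorted(a + b)
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # average rank for tied values
        i = j
    n1, n2 = len(a), len(b)
    W = sum(rank[v] for v in a)             # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (W - mean) / sd
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return 1 if p < alpha else 0
```

For twenty independent runs per algorithm (as in the experiments above), `a` and `b` would be the twenty final fitness values of two algorithms on one benchmark function.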
In Figure 9(i), CWDEPSO is clearly superior to the remaining seven algorithms. In Figure 9(j), JADE achieves the highest convergence accuracy, but CWDEPSO converges very close to it and does so faster. In Figures 9(k) and 9(l), CWDEPSO outperforms all other algorithms, with the fastest convergence speed and the best search accuracy.
Compared with PSO and DE, CWDEPSO adds several parameter operations, mainly the adaptation of the scaling factor F and the crossover probability CR of the DE algorithm, which noticeably increases the running time. The time complexity of both PSO and DE is O(Gmax · Np · D) [41]. Compared with PSO, CWDEPSO introduces the DE algorithm, whose time complexity is O(Gmax · Np · D), and adds the parameter update process, whose complexity is O(Gmax · Np), so the overall complexity of CWDEPSO remains O(Gmax · Np · D). Table 6 shows the running time required by each algorithm until all iterations are completed. PSO requires the least computing time, followed by AIWCPSO, while DEPSO takes a very long time. The proposed CWDEPSO and JADE run in approximately the same time, and it is worth noting that CWDEPSO takes less time than MPSPSO-ST. From Tables 4 and 5 and Figure 9, CWDEPSO has better convergence speed and search accuracy than PSO, DE, AIWCPSO, MPSPSO-ST, JADE, SinDE, and DEPSO, so combining DE and PSO in this way can effectively improve the algorithm. This performance shows that CWDEPSO is a more efficient algorithm for numerical optimization.

Comparison of CWDEPSO with Other Algorithms.
In this part, we compare the CWDEPSO algorithm with several recently proposed optimization algorithms, including BBO [5], SCA [6], MFO [7], ABC [11], and GSA [12]. To ensure a fair comparison, the parameters in both sets of experiments are set according to the original papers, with the number of generations set to 500 and the population size set to 40. Each algorithm is run independently 20 times. The GSA setting is G0 = 100, α = 20; the benchmark functions are those listed in Tables 1-3. The outcomes are presented in Table 7 and Figure 10. Table 7 contains the best results, worst results, mean, and S.D., with the greatest values presented in bold. Figure 11 shows the radar charts of the S.D. and the mean.
According to Table 7, for the unimodal functions f1 ∼ f5, the CWDEPSO algorithm performs well in terms of search stability and accuracy, and its convergence accuracy is higher than that of the other five optimization algorithms. For the multimodal functions f6, f7, f8, and f10, CWDEPSO also remains competitive, although the best results on a few functions are obtained by other algorithms such as SCA (f7, f13, f17, f23). Figure 10 gives the convergence curves of each algorithm on twelve test functions (f2, f3, f5, f6, f7, f9, f11, f12, f13, among others). In Figure 10(j), CWDEPSO is able to search for the optimal value and provides the highest convergence accuracy.
In conclusion, the work done in this paper is beneficial and improves the performance of the algorithm significantly. As an innovation of this paper, we introduce the DE algorithm in the late stage of PSO, which improves the exploitation and exploration capability of the algorithm, enhances population diversity, and effectively avoids premature convergence. Although the introduced parameters and other operations increase the runtime, they significantly improve performance. For the inertia weight ω, the introduction of nonlinear and chaotic mechanisms strengthens the global search ability in the early stage and the local search ability in the later stage, and the inclusion of chaos theory makes the search more diverse and helps the algorithm avoid falling into local extrema. The nonlinear optimization of the learning factors c1 and c2 enhances late local exploitation and makes the algorithm less likely to converge prematurely. The two DE parameters, the scaling factor F and the crossover probability CR, are adapted through Cauchy and Gaussian distributions, respectively, so they are automatically updated to suitable values based on the performance of previous generations, improving the robustness and accuracy of the algorithm. Finally, the value of the particles is utilized more fully through an improved selection strategy, which helps to improve performance. In summary, CWDEPSO can deal with numerical optimization problems effectively.
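The Cauchy/Gaussian adaptation of F and CR described above matches the JADE-style scheme; a minimal sketch under that assumption follows (the paper's exact equations (17)-(21) are not shown in this excerpt, so the 0.1 scale and the success-history update below follow JADE rather than the paper):

```python
import math
import random

def sample_F(mu_F):
    """Scale factor from Cauchy(mu_F, 0.1): clamp draws above 1 to 1 and
    regenerate non-positive draws (the usual JADE truncation rule)."""
    while True:
        F = mu_F + 0.1 * math.tan(math.pi * (random.random() - 0.5))
        if F >= 1.0:
            return 1.0
        if F > 0.0:
            return F

def sample_CR(mu_CR):
    """Crossover rate from N(mu_CR, 0.1), clipped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(mu_CR, 0.1)))

def update_means(mu_F, mu_CR, S_F, S_CR, c=0.1):
    """Shift the location parameters toward the F/CR values that produced
    successful offspring: Lehmer mean for F, arithmetic mean for CR."""
    if S_F:
        mu_F = (1 - c) * mu_F + c * sum(F * F for F in S_F) / sum(S_F)
    if S_CR:
        mu_CR = (1 - c) * mu_CR + c * sum(S_CR) / len(S_CR)
    return mu_F, mu_CR
```

In this scheme, each generation samples a per-particle F and CR, records the values that improved their particles in S_F and S_CR, and then nudges the sampling means toward those successful values.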

Engineering Application and Analysis
In this section, we use the CWDEPSO algorithm to optimize two engineering structures: the spring tension design and the pressure vessel design.

Spring Tension Design.
Spring tension design is a common engineering problem in practice that many works [74,75] have studied. The problem involves three decision variables: coil diameter (d), average coil diameter (D), and number of coil windings (P), represented by x1, x2, and x3, respectively, in this paper. Figure 12 shows the structure of the tension spring. Our goal is to make the spring as light as possible while the three decision variables satisfy the constraints; the constrained model follows the standard formulation in [74,75]. We solve this engineering design with the metaheuristic algorithm, choosing to regenerate particles whenever the constraints are violated. CWDEPSO is used to solve the spring tension optimization problem, and the results are compared with those of other metaheuristic algorithms: AIWCPSO [52], ABC [11], JADE [45], CPSO [25], and GSA [12]. The comparison results are shown in Table 9.
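For reference, the tension/compression spring model commonly used in the literature for this problem can be evaluated with a static penalty; the objective and constraint constants below follow that common formulation (the paper's own equations are not reproduced in this excerpt), and the penalty weight is an assumption, since the paper regenerates infeasible particles instead:

```python
def spring_objective(x):
    """Spring weight: f(x) = (x3 + 2) * x2 * x1^2 (standard formulation)."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """g_i(x) <= 0 constraints of the tension/compression spring problem."""
    x1, x2, x3 = x
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 ** 3 * x1 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return [g1, g2, g3, g4]

def penalized_fitness(x, weight=1e6):
    """Static penalty: objective plus weight * total constraint violation.
    (The paper instead regenerates particles that violate constraints.)"""
    violation = sum(max(0.0, g) for g in spring_constraints(x))
    return spring_objective(x) + weight * violation
```

A metaheuristic such as CWDEPSO would then minimize `penalized_fitness` over the usual box bounds for (x1, x2, x3).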
From the data in Table 9, it can be seen that CWDEPSO performs best among the six algorithms and successfully finds excellent solutions under several constraints. CPSO and JADE obtain the second and third best solutions, respectively, and their decision variables differ significantly from those obtained by CWDEPSO, which indicates that CWDEPSO has a strong exploration ability and can effectively find the best value. Besides, AIWCPSO and ABC perform moderately, while GSA, which did well in the previous section, performs worst. This shows that these algorithms fall into local extrema when dealing with complex multiconstraint problems and have difficulty converging to the optimal value. Compared with them, CWDEPSO is less affected when optimizing the spring tension structure.

Pressure Vessel Design.
This part uses the pressure vessel design experiment, first proposed by Sandgren [76], to further verify the performance of the CWDEPSO algorithm. It contains four decision variables: cylindrical container thickness (Ts), hemispherical cover thickness (Th), cylindrical container inner diameter (R), and cylindrical container length (L), represented by x1, x2, x3, and x4, respectively. The structure of the problem is shown in Figure 13.

Figure 12: Schematic diagram of spring tension design.

Figure 13: Schematic diagram of pressure vessel design.
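For reference, the cost model widely used in the literature for this problem (following Sandgren's formulation and later metaheuristic studies) can be sketched as follows; the exact equations used in the paper are not reproduced in this excerpt, so the constants below follow the common formulation:

```python
import math

def vessel_cost(x):
    """Pressure vessel cost (material, forming, and welding), standard form."""
    x1, x2, x3, x4 = x  # Ts, Th, R, L
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    """g_i(x) <= 0 constraints: shell/head thickness codes, volume, length."""
    x1, x2, x3, x4 = x
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0
    g4 = x4 - 240.0
    return [g1, g2, g3, g4]

def is_feasible(x, tol=1e-6):
    return all(g <= tol for g in vessel_constraints(x))
```

A feasible near-optimal point such as (0.8125, 0.4375, 42.0984, 176.7) yields a cost of roughly 6.06e3 under this formulation, which is the scale of the best solutions reported in comparable studies.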
We can see from Table 10 that the CWDEPSO algorithm is more accurate than the other five algorithms in the optimization of the pressure vessel design. MFO performs second best, but there is a large gap in variable x4 compared with CWDEPSO, indicating that CWDEPSO has better exploration ability. AIWCPSO, SCA, and CLPSO show average performance; compared with CWDEPSO, there are differences in the obtained decision variables and solution results, which shows that CWDEPSO has better search accuracy. Finally, the performance of BBO is unsatisfactory, and its convergence speed is slower than that of the other algorithms. Through the above two experiments, we find that CWDEPSO is feasible and effective for engineering structure optimization and has a relatively broad scope of application.

Summary and Future Work
In this paper, to overcome the shortcomings of PSO and improve its performance in engineering structure design, the CWDEPSO algorithm is proposed. We introduce a mutation operation and other methods according to a probability related to the current number of convergence failures.
This strengthens the communication between individuals within the population and improves population diversity in the later iterations, effectively helping PSO escape premature convergence. In addition, nonlinear dynamic optimization and adaptive adjustment are applied to the PSO and DE parameters, respectively, and chaotic processing is added to the inertia weight. Through these parameter optimizations, the algorithm strengthens its global search ability in the early iterations and focuses on local search in the late iterations; moreover, the randomness and ergodicity brought by chaos make the algorithm more powerful in exploration. Like swarm intelligence algorithms such as MBO, EWA, EHO, MS, SMA, and HHO, CWDEPSO can be used to solve numerical optimization and practical problems. In this paper, 24 benchmark functions and two groups of engineering optimization experiments are adopted to verify the efficiency of the proposed strategy. In the first set of experiments, CWDEPSO is compared with PSO, DE, and their classical variants; the results show that CWDEPSO achieves better results on almost all benchmark functions and has excellent robustness and stability. In the second set of experiments, CWDEPSO is compared with five common metaheuristic algorithms, and the results suggest that CWDEPSO outperforms the other algorithms on most benchmark functions. Finally, CWDEPSO and some metaheuristic algorithms are applied to the engineering structure optimization experiments; compared with the other algorithms, CWDEPSO obtains the best solution in both the spring tension and pressure vessel structure optimizations.
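The monitoring mechanism summarized above can be illustrated with a simple stagnation counter; the exact form of the paper's probability update (equation (26)) is not reproduced in this excerpt, so the exponential mapping and growth rate below are assumptions for illustration:

```python
import math

class StagnationMonitor:
    """Track consecutive iterations without gbest improvement and map the
    count to a mutation probability that grows with stagnation."""

    def __init__(self, k=0.1):
        self.k = k            # assumed growth rate, not from the paper
        self.count = 0
        self.best = math.inf

    def observe(self, gbest_fitness):
        # reset the counter on improvement, otherwise record a failure
        if gbest_fitness < self.best:
            self.best = gbest_fitness
            self.count = 0
        else:
            self.count += 1

    def mutation_probability(self):
        # 0 when the swarm is improving, approaching 1 under long stagnation
        return 1.0 - math.exp(-self.k * self.count)
```

Under this scheme, the DE-style mutation phase fires rarely while PSO is still improving and increasingly often once the global best stagnates.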
In addition, compared with other metaheuristic algorithms, CWDEPSO has the advantages of low time complexity, simple parameter settings, enhanced local search capability, a lower likelihood of falling into local optima, and faster convergence. To sum up, the CWDEPSO algorithm proposed in this article is successful and suitable for optimization problems in numerical and engineering structure optimization.
In addition to optimizing engineering structures, it can also be widely used in many fields such as national defense, transportation, agriculture, the chemical industry, materials, and communications. For example, routing a phone call involves finding the cheapest and least congested route; in the design of mechanical structures, cost is minimized while meeting stress requirements; and in robotics, it can be applied to path planning for mobile robots in dynamic environments. However, for some functions, the performance of CWDEPSO is not ideal, and it struggles to find the optimal value. Therefore, improving the stability of the CWDEPSO algorithm is still worth further investigation. In future work, we will try to learn from APSO [61] to further improve the performance of the CWDEPSO algorithm, and CWDEPSO will be applied to more practical application problems.

Data Availability
The data used to support the findings of this study have been deposited in the GitHub repository (https://github.com/MaZhiteng/A-Hybrid-Dynamic-Probability-Mutation-Particle-Swarm-Optimization-for-Engineering-Structure-Design.git).
Mobile Information Systems