Porcellio scaber Algorithm with t-Distributed Elite Mutation for Global Optimization

As the convergence accuracy of the Porcellio scaber algorithm (PSA) is low, this study proposes an improved algorithm based on a t-distribution elite mutation mechanism. First, the improved algorithm applies a t-distribution mutation to each dimension of the optimal solution of each generation. Using the dominant information of the optimal solution and the characteristics of the t-distribution, the mutation result is then employed as the updated location of the selected Porcellio scaber; thus, the algorithm enhances the ability to jump out of local extrema and improves the convergence speed. Second, the iterative update rule of PSA may lose the information of the elite Porcellio scaber of the previous generation. To solve this problem, a judgment mechanism between the current and previous optimal solutions is included in the algorithm process. Finally, a dynamic self-adaptive improvement is applied to the weight allocation parameter of PSA. Simulation results on 24 benchmark functions show that the improved algorithm has significant advantages in convergence accuracy, convergence speed, and stability compared with basic PSA, PSO, GSA, and FPA, indicating that the improved algorithm has certain advantages in terms of optimization. Optimal solutions with good practicability are obtained by solving three practical engineering problems: three-bar truss, welded-beam, and tension/compression-spring design.


Introduction
Many real-world applications involve complex optimization problems [1]. Swarm intelligence optimization algorithms solve problems by simulating the intelligent characteristics and behavior modes of biological populations, which have high self-organization, self-adaptability, generalization, and abstraction capabilities [2,3]. They can quickly and approximately solve certain NP-hard problems in the real world and have become an important method for effectively solving complex science and engineering optimization problems [4,5]. Since the inception of swarm intelligence optimization algorithms, several classical algorithms, such as the genetic algorithm (GA) [6], particle swarm optimization (PSO) [7], and ant colony optimization (ACO) [8], have emerged. The general framework of a swarm intelligence optimization algorithm starts with a set of stochastic solutions and applies a series of metaheuristics to explore and exploit the search space to find the best approximate solution to the optimization problem. Maintaining population diversity, finding effective exploration and exploitation methods, and balancing the two are the main issues in research on swarm intelligence optimization algorithms [1,9]. The no-free-lunch theorem [10] indicates that no swarm intelligence optimization algorithm can solve all practical optimization problems, which motivates the continued development of this research field. In recent years, the research directions of swarm intelligence algorithms can be divided into three categories [1,11]: (1) discovering the behavioral characteristics of biological communities in nature to design new algorithms.
To this end, swarm intelligence optimization algorithms such as the black widow optimization algorithm (BWO) [12], flower pollination algorithm (FPA) [13], whale optimization algorithm (WOA) [14], sparrow search algorithm (SSA) [15], coronavirus herd immunity optimizer (CHIO) [16], chameleon swarm algorithm (CSA) [17], and white shark optimizer (WSO) [1] have emerged. (2) Integrating the advantages of different swarm intelligence algorithms to form hybrid algorithms; for example, some studies [4, 18-21] combine two or more algorithms to improve the efficiency of solving optimization problems. (3) Improving existing algorithms; commonly used techniques include chaotic maps [22,23], cellular structures [9,24], and mutation operators [25,26]. The objective functions of many real-world science and engineering problems are mostly nonlinear, nondifferentiable, multimodal, and high-dimensional, so traditional gradient-based local optimization methods are not very effective [2]. Global optimization methods based on swarm intelligence do not require gradient information, only knowledge of the input and output parameters of the problem, and have shown good performance in solving complex optimization problems [5]. The areas that involve these problems include vehicle routing [20], coordinating the charging schedules of electric vehicles [27], feature selection [28], flow-shop scheduling [29,30], wireless sensor network deployment [31], and single batch-processing machines [32].
The Porcellio scaber algorithm (PSA) [33] is a new swarm intelligence optimization algorithm designed around the two main survival principles of Porcellio scaber. Currently, PSA has been successfully applied in engineering fields such as microgrid optimization [34,35] and pressure vessel design [36]; however, as an emerging algorithm, its relevant research and applications are in their infancy.
PSA has two main shortcomings. First, the iterative update rules of the algorithm should make full use of the information of the previous generation's optimal Porcellio scaber; however, the algorithm lacks a judgment operation between the contemporary (generation k + 1) and previous-generation (generation k) optimal solutions. The contemporary optimal solution may not be as good as the previous one; if it is directly carried into the next-generation (generation k + 2) operation, the information of the generation-k optimal solution is lost, which affects the convergence speed of the algorithm. Second, the algorithm easily falls into local extrema, especially when dealing with high-dimensional optimization problems, which can lead to premature convergence and stagnation.
To solve these problems, this study proposes an improved PSA with t-distribution elitist mutation (TPSA). The main contributions of this study are summarized as follows: (1) TPSA applies a dimension-by-dimension t-distribution perturbation to the position vector of each generation's elite Porcellio scaber and, with a certain probability, takes the mutation result as the updated position of the selected Porcellio scaber. It fully uses the advantageous information contained in the elite Porcellio scaber and, with the help of the t-distribution characteristics, guides the population to quickly approach the optimal solution, improving convergence accuracy and speed. (2) A judgment operation between the contemporary optimal solution and the previous generation's optimal solution is added to the PSA process, and a dynamic adaptive improvement of the weight allocation parameter is carried out. This improves the balance between exploration and exploitation and improves performance. (3) The performance of TPSA is verified on 24 well-known benchmark functions and compared with the comparison algorithms. The practicability of TPSA is verified on three real-world engineering problems, and the best solutions obtained are compared with those of algorithms reported in recent years. The rest of the paper is organized as follows: Section 2 introduces the principle of the PSA algorithm, Section 3 describes the TPSA improvement strategies and implementation process in detail, Section 4 presents the experimental results and the analysis of convergence and stability, Section 5 introduces the application of TPSA to three real-world engineering problems, and Section 6 concludes the paper.

Porcellio scaber Algorithm
Porcellio scaber is a worldwide animal species, as shown in Figure 1 [33]. It prefers to live in damp, dark, and humus-rich places and in groups. Several studies [37,38] have shown that P. scaber has two behaviors regarded as its survival laws: (1) group behavior, namely aggregation, and (2) individual behavior, namely a propensity to explore novel environments. When the living environment is unfavorable, they explore new environments separately; conversely, when environmental conditions are favorable, they stay together.
PSA uses the fitness function as a scale to evaluate the advantages and disadvantages of the porcellio scabers' living environment, and the mathematical modeling of the two survival rules of the Porcellio scaber is used as the rules of the algorithm update iteration. Its mathematical description is as follows.
In the d-dimensional search space, the population consisting of N Porcellio scabers is X = (X_1, X_2, ..., X_N), and the vector X_i^k = (x_i1^k, x_i2^k, ..., x_ij^k, ..., x_id^k)^T represents the position of the ith Porcellio scaber of the kth generation. The movement of the aggregation behavior of the Porcellio scaber is modeled according to Equation (1), which can also be expressed as Equation (2). From equation (1), all Porcellio scabers eventually stay at the place with the best environmental conditions among the initial positions; however, if the initial conditions are the worst environment, the Porcellio scabers cannot survive with only the aggregation behavior given in equation (1). The actual movement of the Porcellio scaber should be the weighted result of the two behaviors of gathering and exploring the new environment. Therefore, the final updating rule of PSA is given in the literature [33] and calculated according to Equation (3), where λ ∈ (0, 1) is the weight allocation parameter of the two behaviors of gathering and exploring new environments, and the value of λ can differ among Porcellio scabers. pτ represents the motor behavior of the Porcellio scaber exploring a new environment, where τ is a d-dimensional random vector and p is a function of the intensity of the exploratory action, p = f(x_i^k + τ). Each Porcellio scaber randomly selects a direction around its center of mass to explore the new environment. The simplest choice for p is a function that represents the fitness of the Porcellio scaber as the intensity of the exploration behavior.
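Since equations (1)-(4) are not reproduced in this excerpt, the update rule described above can only be sketched. The following is a minimal Python sketch under stated assumptions: the aggregation term pulls each individual toward the current best, the exploration term is (1 − λ)pτ, and p is normalized from the explored environmental conditions E_x; the exact forms of the aggregation sign and the p normalization are assumptions, not the paper's equations.

```python
import numpy as np

def psa_step(X, fitness, lam=0.8, rng=None):
    """One PSA iteration (sketch): each individual moves toward the
    current best (aggregation) and explores along a random direction
    tau with an intensity p derived from the explored environments E_x."""
    rng = rng or np.random.default_rng()
    f = np.array([fitness(x) for x in X])
    best = X[np.argmin(f)]
    tau = rng.uniform(-1.0, 1.0, size=X.shape)               # random exploration directions
    E = np.array([fitness(x + t) for x, t in zip(X, tau)])   # environmental conditions at x + tau
    p = (E - E.min()) / (E.max() - E.min() + 1e-12)          # normalized action intensity (assumed form)
    return X - lam * (X - best) - (1 - lam) * p[:, None] * tau

# Usage: minimize the 10-dimensional sphere function
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(30, 10))
sphere = lambda x: float(np.sum(x ** 2))
init_best = min(sphere(x) for x in X)
for _ in range(200):
    X = psa_step(X, sphere, lam=0.8, rng=rng)
final_best = min(sphere(x) for x in X)
```

The aggregation term alone collapses the population onto the best individual, which is why the exploration term and, in TPSA, the elite mutation are needed to keep searching.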
The literature [33] provides the following definition of p with good performance:

t-Distribution Elite Dimensional Variation Strategy.
Each generation's optimal Porcellio scaber is the elite of the population. The advantageous information it contains plays a key role in the update iterations of the algorithm, enabling rapid convergence to the global optimal solution and improving accuracy. To this end, the TPSA algorithm defines an elite mutation probability parameter P_t and performs a t-distribution mutation on the position vector of the optimal Porcellio scaber in each generation, dimension-by-dimension. The mutation result is assigned, with probability P_t, to the selected Porcellio scaber as its new position, making full use of the advantageous information of the optimal Porcellio scaber. The search space of the Porcellio scaber is controlled by the characteristics of the t-distribution, and the algorithm is guided to converge quickly to the optimal solution. This is expressed as follows: where c is the scaling coefficient of the step size and t(iter) is the t-distribution whose degrees of freedom equal the number of algorithm iterations iter. The curve shape of the t-distribution is related to the degrees of freedom: when the degree of freedom is 1, the shape is similar to that of the Cauchy distribution; as the degrees of freedom increase, the shape gradually approaches a Gaussian distribution, as shown in Figure 2. In the early iterations of TPSA, the iter value is small, and the t-distribution presents the characteristics of the Cauchy distribution, which can mutate next-generation Porcellio scabers far away from the optimal position. Thus, the diversity of the population is maintained to search a larger space, the exploration ability is improved, and the algorithm is prevented from falling into local extrema and convergence stagnation.
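The elite mutation step described above can be sketched as follows. The parameters `c` (step-size scaling) and `P_t` (elite mutation probability) follow the paper's names, and the degrees of freedom are set to the current iteration number; mutating every dimension with an independent t-sample is the natural reading of "dimension-by-dimension" but is stated here as an assumption.

```python
import numpy as np

def t_mutate_elite(best, it, c=1.0, rng=None):
    """Dimension-by-dimension t-distribution perturbation of the elite
    position; degrees of freedom = iteration number, so the mutation is
    Cauchy-like early (it = 1) and Gaussian-like late (large it)."""
    rng = rng or np.random.default_rng()
    return best + c * rng.standard_t(df=it, size=best.shape)

def maybe_adopt(x, best, it, p_t=0.3, c=1.0, rng=None):
    """With probability p_t, a selected individual adopts the mutated
    elite position as its update (sketch of the TPSA elite-mutation step)."""
    rng = rng or np.random.default_rng()
    return t_mutate_elite(best, it, c, rng) if rng.random() < p_t else x

# Heavy early tails vs. concentrated late mutations
rng = np.random.default_rng(0)
best = np.zeros(5)
early = np.array([t_mutate_elite(best, 1, rng=rng) for _ in range(2000)])
late = np.array([t_mutate_elite(best, 500, rng=rng) for _ in range(2000)])
spread_early = float(np.percentile(np.abs(early), 99))
spread_late = float(np.percentile(np.abs(late), 99))
```

The two spread values illustrate the Cauchy-to-Gaussian transition: the 99th percentile of the early mutation magnitudes is far larger than that of the late ones.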
As the algorithm runs iteratively into the middle and late stages, the iter value increases, and the t-distribution takes on the characteristics of a Gaussian distribution, which mutates around the optimal Porcellio scaber to generate the next generation with a certain probability; this improves the exploitation ability of PSA and increases the convergence accuracy.

Optimal Porcellio scaber Retention Strategy.
From equation (3), when PSA performs a position update, the (k + 1)-generation position update requires the k-generation optimal Porcellio scaber position information. However, according to the implementation process of PSA in the literature [33], the algorithm does not consider the case in which the optimal solution of generation k + 1 is inferior to that of generation k; if the inferior solution is nevertheless included in the position update of generation k + 2, the information of the generation-k optimal solution is lost. Therefore, after the contemporary position update, a judgment mechanism between the contemporary and previous-generation optimal solutions is added, with the following execution process:

if MinFitness(k + 1) < MinFitness(k)
    proceed with Algorithm 1 in Section 3.4 for the next iteration
else
    retain the generation-k optimal Porcellio scaber

Here, i is the index of the kth-generation optimal Porcellio scaber, d is the dimension of the search space, and MinFitness(k) is the optimal fitness of the kth-generation population. If the optimal fitness of the generation-(k + 1) population is less than that of the generation-k population, the next iteration continues according to the execution process of Algorithm 1. Otherwise, the optimal Porcellio scaber information of the generation-k population is retained into generation k + 1 and included in the position update calculation of generation k + 2.
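The judgment mechanism above can be sketched as a small helper. Minimization and in-place replacement of the current best individual by the previous elite are assumptions consistent with the description; the function and variable names are illustrative, not the paper's.

```python
import numpy as np

def retain_elite(X, fitness, best_prev, f_best_prev):
    """Judgment step: keep the new best if it improves on the previous
    generation's elite; otherwise re-inject the previous elite so its
    information enters the next position update (minimization assumed)."""
    f = np.array([fitness(x) for x in X])
    i = int(np.argmin(f))
    if f[i] < f_best_prev:
        return X, X[i].copy(), float(f[i])
    X = X.copy()
    X[i] = best_prev                 # previous elite replaces the current best individual
    return X, best_prev.copy(), float(f_best_prev)

# Usage: the previous elite (fitness 0) beats every new individual (fitness 3)
sphere = lambda x: float(np.sum(x ** 2))
X2, best, f_best = retain_elite(np.ones((4, 3)), sphere, np.zeros(3), 0.0)
```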

Dynamic Adaptive Strategy of Weight Allocation Parameters.
The proportion of exploration and exploitation in PSA is controlled by the weight allocation parameter λ. In PSA, λ is a constant, which has a significant influence on the results for different optimization problems and is difficult to determine. To this end, this study introduces the concept of inertia weight, inspired by the literature [7], and calculates λ with the following equation:

λ = max − (max − min) × (iter / max_iter),
where [min, max] is the range of λ, max_iter is the maximum number of iterations of the algorithm, and iter is the current iteration number. In this manner, λ adjusts dynamically and adaptively as the algorithm runs. In the early stage, λ is large, and the algorithm mainly performs exploration, which is conducive to expanding the search space and improving the convergence speed. In the middle and late stages, λ decreases gradually, and the algorithm mainly performs exploitation, which helps improve the convergence accuracy.
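Assuming the linear decrease implied by the experimental settings (λ reduced from 0.9 to 0.2), the parameter can be computed as:

```python
def adaptive_lambda(it, max_iter, lam_max=0.9, lam_min=0.2):
    """Linearly decreasing weight allocation parameter: large early
    (exploration-dominated), small late (exploitation-dominated)."""
    return lam_max - (lam_max - lam_min) * it / max_iter
```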

TPSA.
After adopting the t-distributed elite mutation, optimal Porcellio scaber retention, and dynamic adaptive strategy of weight allocation parameter, the TPSA algorithm is as follows:

Time Complexity Analysis.
Given that the objective function of the optimization problem is f(x) and the dimension of the search space is d, the time complexity of PSA follows from the description of the algorithm and the operation rules of time complexity. TPSA adds the calculation of the t-distribution elite variation, the dynamic weight parameters, and the optimal Porcellio scaber retention strategy. According to the execution process of Algorithm 1, the time complexity of each incremental part can be deduced as T(TPSA) = T(PSA) + T(t-distribution elite variation) + T(dynamic weight parameters) + T(retention strategy of optimal Porcellio scaber). After simplification, the time complexity of TPSA and PSA is of the same order of magnitude, with an insignificant increase and no negative impact on the algorithm.

Experiment Design and Parameter Settings.
TPSA was compared with classical and emerging intelligent algorithms, namely PSA [33], PSO [7], GSA [39], and FPA [13], on 24 benchmark functions to verify its performance. The benchmark functions are listed in Table 1 and include nine high-dimensional unimodal functions, nine high-dimensional multimodal functions, and six low-dimensional functions.
Without loss of generality, each algorithm was independently run 50 times for each function, and the worst, best, mean, standard deviation, and success rate were calculated as evaluation metrics for algorithm performance. The optimization precision was set to 10^-10, and the success rate was calculated as follows: Success rate (SR) = (number of successful runs) / (total number of runs). In the experiment, a run was considered successful if |actual solution value − theoretical optimal value| < 10^-10.
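The success-rate metric can be computed directly:

```python
def success_rate(results, optimum, tol=1e-10):
    """SR = (number of successful runs) / (total runs); a run succeeds
    when its final value is within tol of the theoretical optimum."""
    hits = sum(1 for r in results if abs(r - optimum) < tol)
    return hits / len(results)

# Two of three runs reach the optimum 0 within 1e-10
sr = success_rate([0.0, 1e-12, 0.5], 0.0)
```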

Parameter Settings.
The population size of the five algorithms was N = 30, and the number of iterations was set to 1,000. Through a large number of numerical experiments, we concluded that TPSA performs better with the following parameters: P_t = 0.3; a step-size scaling coefficient of 30 on f_13 and f_19-f_22 and 1 on the others; and λ linearly reduced from 0.9 to 0.2. The PSA weight allocation parameter was λ = 0.8. The PSO learning factors were c_1 = c_2 = 2, the inertia weight was linearly reduced from 0.9 to 0.4, and the maximum speed was half of the search space. The GSA gravitational constant was G_0 = 100 and the acceleration a = 20. The FPA probability conversion parameter was p = 0.2.

Results and Discussion.
The experimental results and correlation analysis are presented separately according to the type of each benchmark function. Table 2 shows the experimental results of the five algorithms on nine high-dimensional unimodal functions. Regarding the search success rate, TPSA reached 100% on all nine functions, whereas the comparison algorithms mostly had a success rate of zero or a very low value, except for GSA, which reached 100% on the f_1 function. Regarding convergence to the theoretical optimal value, TPSA found the theoretical optimum 0 for eight functions (all except f_4), and the mean and standard deviation indicate strong robustness. PSA, GSA, and FPA could not find the theoretical optimal value of any function. PSO, with the maximum speed set to half of the search space, found the theoretical optimal value on the f_1, f_4, and f_6 functions; however, judging from the mean and standard deviation, its robustness was poor and its search success rate was low. For the f_4 function, although TPSA could not converge to the theoretical optimum 0, its mean value was 1.7908E−290 and its worst value 5.2583E−289, with a standard deviation of approximately zero. This indicates that TPSA is more stable and robust than PSA, PSO, GSA, and FPA on this function.

Type I: High-Dimensional Unimodal Function.
In essence, TPSA achieved better performance in convergence accuracy and robustness than PSA, PSO, GSA, and FPA when searching high-dimensional unimodal functions.

Type II: High-Dimensional Multimodal Functions.
These functions often have numerous local minima distributed in the solution space, and the global optimal solution is not easily found; however, according to the experimental results in Table 3, TPSA converged to the theoretical optimal value 0 on the f_10, f_12, f_14, f_15, f_16, and f_17 functions. Considering the mean and standard deviation, the robustness of the algorithm was good, and the success rate of finding the optimal value was 100% except on f_18. PSA, GSA, and FPA could not find the theoretical optimal value, and their success rates were zero. Although PSO found the theoretical optimal values of the f_10, f_12, and f_16 functions, its robustness was poor judging from the mean and standard deviation, and its success rates on f_10, f_12, and f_16 were low, at 26%, 38%, and 12%, respectively; evidently, it could not find the theoretical optimal values of the other functions. On the f_11 function, TPSA outperformed all comparison algorithms except PSO, which is comparable to TPSA in terms of the optimal value. On the f_13 function, TPSA was inferior to PSO and GSA in the optimal-value metric but better than PSA and FPA; it was better than PSA, PSO, GSA, and FPA in the worst-value and standard-deviation metrics. On the f_18 function, none of the algorithms could find the theoretical optimal value of the function, but the convergence accuracies of TPSA and GSA were significantly better than those of the other algorithms, indicating that these algorithms are more stable. In conclusion, the performance of TPSA on high-dimensional multimodal functions was significantly better than that of the other algorithms, showing that the improvement strategy of this study is effective.

Type III: Low-Dimensional Function.
The global optimum of a low-dimensional function is usually surrounded by many local extrema, and the function shows strong oscillation characteristics; the global optimum is not easily found. Therefore, such functions are often used to test the exploration capability of a swarm intelligence algorithm [40]. Table 4 presents the experimental results. On the f_19 and f_20 functions, the success rate of TPSA was 100%, and its worst value, mean, and standard deviation were better than those of the PSA, PSO, GSA, and FPA algorithms. TPSA was comparable to PSO on the f_21 function and to FPA on the f_22 function, outperforming the other comparison algorithms. For the f_23 and f_24 functions, TPSA and PSO achieved the best search performance, finding the theoretical optimal value −1 in all 50 experiments, with a search success rate of 100%. The optimization performance of PSA, GSA, and FPA was poor, as they could not find the theoretical optimal value of the function, and their search success rates were low.

Convergence and Stability Analysis.
The fitness convergence curves for some functions are shown in Figure 3, and a boxplot of the experimental results is shown in Figure 4. Figures 3 and 4 show the advantages of the optimization performance of TPSA over the other algorithms. The theoretical optimal values of the selected functions are zero.
For convenience of display and observation, the fitness values for all functions are plotted on a base-10 logarithmic scale. Figure 3 shows that TPSA converged to the theoretical optimal value of each function, whereas PSA, PSO, GSA, and FPA could not. In particular, for the f_6, f_10, f_14, and f_15 functions, TPSA converged to the theoretical optimal value within only a few iterations; therefore, its convergence speed has obvious advantages over the other algorithms. Figure 4 shows that TPSA converged to the theoretical optimal value of the function in every run, and its convergence accuracy and stability were better than those of PSA, PSO, GSA, and FPA. The convergence results of PSA showed low precision and high fluctuation; although PSO converged to the theoretical optimal value on the f_1, f_6, and f_10 functions, its fitness fluctuation was strong and its performance unstable. The fitness fluctuations of GSA and FPA were smaller; however, neither could converge to the theoretical optimal value of the function.
In conclusion, TPSA has significantly improved convergence accuracy, convergence speed, robustness, and stability, compared with the other algorithms and is competitive among intelligent optimization algorithms.

Wilcoxon Rank-Sum Test.
As the performance of an algorithm cannot be fully described using only the mean and standard deviation, statistical validation is required [41]. Therefore, in this study, the Wilcoxon rank-sum test was chosen to verify whether TPSA is significantly different from PSA, PSO, GSA, and FPA at the 5% significance level. When p < 0.05, the H0 hypothesis is rejected and there is a significant difference between the two algorithms; otherwise, there is no obvious difference and their performance is equivalent. Table 5 shows the Wilcoxon rank-sum p values of TPSA against the other algorithms for all benchmark functions, where "N/A" indicates that the two algorithms cannot be meaningfully compared; TPSA cannot be compared with itself and is marked "N/A." The symbols "+," "−," and "=" indicate that TPSA is superior, inferior, or equivalent to the other algorithm, respectively. The p value in Table 5 exceeds 0.05 only when TPSA is compared with PSO on the f_13 function and with FPA on the f_22 function (marked in bold in the table); it is less than 0.05 in all other cases. Table 5 thus shows that the optimization performance of TPSA is significantly better than that of PSA, PSO, GSA, and FPA, verifying the effectiveness of the improved strategy in this study.
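The test used in Table 5 can be sketched without external dependencies using the normal approximation of the rank-sum statistic (no tie correction; scipy.stats.ranksums implements the same two-sided test). The sample data below are synthetic stand-ins, not the paper's results.

```python
import math

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no tie
    correction): pool both samples, sum the ranks of sample a, and
    compare the standardized statistic with the normal distribution."""
    n1, n2 = len(a), len(b)
    pooled = sorted((v, g) for g, xs in ((0, a), (1, b)) for v in xs)
    W = sum(r + 1 for r, (v, g) in enumerate(pooled) if g == 0)  # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (W - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Example: two clearly separated sets of 50 final fitness values
a = [i * 1e-12 for i in range(50)]        # stand-in for one algorithm's runs
b = [1e-3 + i * 1e-6 for i in range(50)]  # stand-in for another algorithm's runs
p = ranksum_p(a, b)
significant = p < 0.05                    # reject H0: the samples differ significantly
```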

Performance Analysis of TPSA in Higher Dimensions.
To further verify the stability and convergence accuracy of TPSA in higher dimensions, three functions, f_5, f_12, and f_15, were selected from the high-dimensional unimodal and multimodal test functions for simulation experiments in 500, 1,000, and 2,000 dimensions. The program was independently run 50 times, and the mean and standard deviation were used as evaluation metrics. Table 6 presents the experimental results. It is observed that, even with a significant increase in the function dimension, the mean and standard deviation of the fitness of TPSA remained 0, while the mean value obtained by PSA increased significantly. This shows that TPSA did not fall into the "dimension disaster" that reduces optimization performance as the dimension increases. Compared with PSA, TPSA had a stronger ability to jump out of local extrema, maintained good stability, and converged to the theoretical extreme value in higher dimensions. This further demonstrates the effectiveness of the t-distributed elite mutation strategy proposed in this study.

Strategy Effectiveness Analysis.
PSA variants using only the t-distribution mutation strategy, the optimal Porcellio scaber retention strategy, and the dynamic adaptive weight allocation strategy are denoted as TMPSA, ORSPSA, and DAPSA, respectively. PSA was compared experimentally with these three single-strategy improved algorithms, and the results are listed in Table 7. As seen in Table 7, for the unimodal and multimodal functions f_1, f_3, f_5, f_7, f_9, f_11, f_13, f_15, and f_17, the three strategies improved PSA performance overall, with the t-distribution mutation strategy playing the key role. For the functions f_3, f_9, and f_13, TPSA obtained the optimal solution through the integrated action of the three strategies. For the fixed-dimensional multimodal functions f_19-f_22, the optimization results of ORSPSA and DAPSA were comparable to those of PSA, but TMPSA performed worse. However, as shown in Table 4, with the integrated action of the three strategies, TPSA achieved search success rates of 100%, 100%, 96%, and 86% on these functions, respectively. The optimization performance was substantially improved compared with that of PSA; therefore, the strategies proposed in this paper are effective.

Engineering Applications of TPSA
To further study the performance of TPSA, it was applied to solve three practical engineering problems-three-bar truss, welded-beam, and tension/compression-spring designs-and the results were compared with those reported in the literature.
These engineering application problems are multiconstraint optimization problems; however, PSA cannot solve constrained optimization problems directly. This study uses the penalty function in the literature [15] to transform the constrained optimization into an unconstrained optimization for the solution. Figure 5 illustrates the three-bar truss structure. The height of the truss is H; the cross-sectional areas of the bars are A_1, A_2, and A_3; and the concentrated load is p. The problem is to minimize the volume of the three-bar truss under stress, deflection, and buckling constraints. It requires the optimization of two variables (A_1 and A_2) to adjust the cross-sectional area of each bar. The problem is mathematically described as follows:

Three-Bar Truss Design Problem.
where 0 ≤ x_1, x_2 ≤ 1, l = 100 cm, p = 2 kN/cm², and σ = 2 kN/cm². The population size was set to 50 and the maximum number of iterations to 1,000. The optimal solutions of TPSA and PSA were compared with the optimal solutions reported in the relevant literature. The results are listed in Table 8, which shows that the optimal solution obtained by TPSA is superior to those of PSA, CS [42], HHO [43], SC-GWO [44], and GLF-GWO [45] and equivalent to that of AEO [46]. This shows that TPSA can optimize the design of the three-bar truss effectively.
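Because the equations of this problem are not reproduced in this excerpt, the sketch below uses the standard benchmark formulation of the three-bar truss, combined with a simple static penalty as a stand-in for the penalty function of [15] (whose exact form is not shown here); the penalty weight `mu` is an illustrative choice.

```python
import math

def truss_penalized(x, mu=1e6, l=100.0, P=2.0, sigma=2.0):
    """Three-bar truss volume with a static penalty for violated stress
    constraints (standard benchmark formulation; illustrative stand-in
    for the paper's constraint handling)."""
    x1, x2 = x
    volume = (2 * math.sqrt(2) * x1 + x2) * l
    g = [  # stress constraints, each required to be <= 0
        (math.sqrt(2) * x1 + x2) / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma,
        x2 / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma,
        1.0 / (x1 + math.sqrt(2) * x2) * P - sigma,
    ]
    return volume + mu * sum(max(0.0, gi) ** 2 for gi in g)

# A widely reported near-optimal design has a volume near 263.9
val = truss_penalized((0.7887, 0.4082))
```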

Welded-Beam Design Problem.
The welded-beam structure is shown in Figure 6, with four design parameters: weld thickness (h), length of the clamped bar (l), height of the bar (t), and thickness of the bar (b). The optimal design should minimize the construction cost under the constraints of shear stress τ, bending stress σ, buckling load P_c, and deflection δ. The mathematical description of the problem is as follows: where p = 6000, L = 14, E = 30 × 10^6, and G = 12 × 10^6. The variable ranges are 0.1 ≤ x_1, x_2 ≤ 2 and 0.1 ≤ x_3, x_4 ≤ 10. The population size was set to 50; the maximum number of iterations was 10,000; the step-size scaling factor c was 2; and the program was independently run 30 times. TPSA and PSA were used to solve the welded-beam design problem, and the results were compared with those of the algorithms reported in the relevant literature. A comparison of the optimal solutions and the statistical results are presented in Tables 9 and 10, respectively. It is observed that the optimal solution obtained by TPSA is superior to those of PSA, CPSO [47], IGMM [48], TEO [49], IGWO [50], SFOA [51], CS-BSA [52], and WAROA [53] and is equivalent to that of AEO [46]; however, the mean and standard deviation of TPSA are better than those of AEO.
This indicates that TPSA is better than the others at solving the welded-beam design problem.
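For reference, the objective of the standard welded-beam benchmark is a short closed-form fabrication cost; the stress, buckling, and deflection constraints are omitted here for brevity, and the evaluated design point is a widely reported near-optimal solution from the benchmark literature, not necessarily the one obtained in this paper.

```python
def weld_cost(x):
    """Fabrication-cost objective of the standard welded-beam benchmark:
    weld material cost plus bar material cost (constraints omitted)."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# A widely reported near-optimal design costs about 1.7249
cost = weld_cost((0.2057, 3.4705, 9.0366, 0.2057))
```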

Tension/Compression-Spring Design Problem.
Figure 7 shows the structure of the tension/compression spring with three design parameters: average coil diameter (D), number of active coils (p), and wire diameter (d). The design goal is to minimize the weight under certain constraints. The mathematical description of the problem is as follows: where 0.05 ≤ x_1 ≤ 2.00, 0.25 ≤ x_2 ≤ 1.30, and 2.00 ≤ x_3 ≤ 15.00. The population size was set to 50, the maximum number of iterations was 1,000, and the step-size scaling factor was set similarly. A comparison of the optimal solutions and the statistical results are presented in Tables 11 and 12, respectively. Tables 11 and 12 show that the optimal solution of TPSA is superior to those of PSA, HMPA [4], AEO [46], CPSO [47], IGWO [50], SFOA [51], and PO [54] and is equivalent to those of TEO [49] and WAROA [53]; however, the worst and average values of TPSA are better than those of WAROA. This indicates that TPSA can effectively solve the tension/compression-spring design problem. Table 12 also shows that the optimal solution of PSA is inferior to those of all other algorithms, and its average value and standard deviation are relatively large, indicating that PSA is unstable in solving this problem. The TPSA results are superior to those of the algorithms reported in the literature, which further shows that the improved strategy in this study effectively improves the performance of PSA.
TPSA mutates the elite Porcellio scaber using the t-distribution operation. The degrees of freedom of the t-distribution equal the algorithm's iteration count, so their value increases dynamically as the algorithm runs. Thus, the shape of the distribution changes dynamically from a Cauchy to a Gaussian distribution. In the early stage of the algorithm, these dynamically changing characteristics largely maintain the diversity of the population so that it can sufficiently explore the search space. In the late stage, the t-distribution mainly exhibits Gaussian characteristics, and the mutated population performs fine exploitation in a smaller space around the optimal Porcellio scaber, which improves convergence accuracy while preventing the algorithm from sinking into a local optimum. The optimal Porcellio scaber retention strategy in TPSA partially overcomes PSA's disadvantage of losing the optimal solution of the previous generation. TPSA's dynamic adaptive improvement of the weight-distribution parameters improves the balance between exploration and exploitation and enhances performance. In summary, TPSA assembled with these strategies has good performance; therefore, good optimal solutions are obtained when solving the three engineering problems.
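The mutation behavior described above can be sketched as follows. The exact update rule (here, perturbing the elite as best + best · noise) and the clipping to bounds are assumptions for illustration; only the use of iteration-indexed degrees of freedom is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def elite_t_mutation(best, iteration, lb, ub):
    """Dimension-by-dimension t-distribution mutation of the elite solution.

    The degrees of freedom equal the current iteration number: with df = 1
    the samples follow a Cauchy distribution (heavy tails, wide exploration);
    as df grows the distribution approaches a Gaussian (small steps, fine
    exploitation around the elite). Update form and bound handling are
    illustrative assumptions, not the paper's exact rule.
    """
    noise = rng.standard_t(df=iteration, size=best.shape)
    mutant = best + best * noise       # perturb every dimension of the elite
    return np.clip(mutant, lb, ub)     # keep the mutant inside the bounds

best = np.array([0.5, -1.2, 2.0])
early = elite_t_mutation(best, iteration=1, lb=-5.0, ub=5.0)    # Cauchy-like
late = elite_t_mutation(best, iteration=500, lb=-5.0, ub=5.0)   # near-Gaussian
print(early, late)
```

Sampling once per dimension (rather than one scalar for the whole vector) is what makes the mutation dimension-by-dimension, so each coordinate of the elite can jump independently.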

Conclusions
PSA is a new swarm intelligence optimization algorithm that suffers from low convergence accuracy and premature convergence. To address these issues, an improved PSA (TPSA), based on the t-distribution elite mutation mechanism, is proposed. First, a dimension-by-dimension t-distribution mutation is applied to each generation's elite Porcellio scaber; the characteristics of the t-distribution maintain population diversity, thus enhancing the algorithm's ability to explore the global space and exploit the local space. Second, a judgment mechanism comparing the current optimal solution with the previous generation's optimal solution is added to the algorithm to address the shortcoming that basic PSA may lose the information of the previous generation's elite Porcellio scaber. Finally, dynamic adaptive improvements are made to the weight-assignment parameters of PSA to balance exploitation and exploration and improve its ability to find the best solution. The performance and practicality of TPSA were evaluated on 24 benchmark functions and in three real-world engineering problems. First, the convergence accuracy, convergence speed, and stability of TPSA were evaluated on 24 benchmark functions, including high-dimensional unimodal, high-dimensional multimodal, and low-dimensional functions. The results show that TPSA has significant advantages over basic PSA, PSO, GSA, and FPA. This was also confirmed by the Wilcoxon rank-sum test on the experimental results. To further validate the performance of TPSA, experiments were conducted in 500, 1,000, and 2,000 dimensions on some functions; the results show that TPSA converged to the theoretical optimum of each function without falling into the "curse of dimensionality," which further demonstrates its good convergence and stability.
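The elite-retention judgment mechanism summarized above amounts to a greedy comparison between generations; the following is a minimal sketch in which the function name, the minimization setting, and the sphere objective are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the elite-retention judgment: the new candidate replaces
# the previous generation's elite only if its fitness is better (lower, for
# minimization), so elite information is never lost between generations.
def retain_elite(prev_best, candidate, fitness):
    return candidate if fitness(candidate) < fitness(prev_best) else prev_best

sphere = lambda x: float(np.sum(x ** 2))   # stand-in objective, not the paper's
prev = np.array([1.0, 1.0])
kept = retain_elite(prev, np.array([2.0, 2.0]), sphere)   # worse -> keep prev
new = retain_elite(prev, np.array([0.1, 0.1]), sphere)    # better -> replace
print(kept, new)
```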
In the three practical engineering problems of three-bar truss, welded-beam, and tension/compression-spring design, the optimal solutions of TPSA were better than those of the algorithms reported in the related literature, which verifies its practicality. However, it should also be noted that TPSA is inferior to PSO and GSA in optimizing some functions. Therefore, in future work, we will continue to improve the algorithm.

Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.