An Adaptive Chaotic Sine Cosine Algorithm for Constrained and Unconstrained Optimization

The sine cosine algorithm (SCA) is a meta-heuristic approach suggested in recent years, which repeats randomized update steps built on the sine and cosine functions to find the global optimum. SCA relies heavily on randomness in its search. At the later stage of the algorithm, the drop in population diversity leads to locally oriented optimization and slow convergence when dealing with complex problems. Therefore, this paper proposes an adaptive chaotic SCA (ASCA) based on adaptive parameters and a chaotic exploitative strategy to alleviate these shortcomings. Two mechanisms are introduced into the original SCA. First, an adaptive transformation parameter is proposed to make the transition between global search and local exploitation more flexible. Then, a chaotic local search is added to strengthen the local search behavior of the algorithm. The effectiveness of ASCA is validated on a set of benchmark functions, including unimodal, multimodal, and composition functions, by comparing it with several well-known and advanced meta-heuristics. Simulation results demonstrate the significant superiority of ASCA over its peers. Moreover, three engineering design cases are employed to study the advantage of ASCA when solving constrained optimization tasks. The experimental results show that the improvement in ASCA is beneficial and that it performs better than other methods in solving these types of problems.

This makes SCA valued and applied in many fields. For example, Yang et al. [31] proposed a multigroup multistrategy SCA (MMSCA) for solving the capacitated vehicle routing problem in transportation. Of course, SCA also has deficiencies: it inevitably falls into local optima. However, its simple structure leaves great potential for improvement. We aim to make up for the shortcomings of SCA by adding new mechanisms so that it performs better.

Literature Review.
There are still many problems to be solved with the existing intelligent algorithms [32][33][34][35][36][37][38]. For example, when the convergence speed of an algorithm is slow, it easily falls into local optima [4,[39][40][41][42][43][44][45]. To solve these problems, many scholars have carried out related research [41,[46][47][48][49][50][51][52][53]. Nenavath and Jatoth [54] proposed a new optimization algorithm that hybridizes SCA with the differential evolution algorithm (DE); it has a stronger ability to escape local optima than standard SCA and DE, and they applied it to object tracking. Rizk-Allah [55] presented the multiorthogonal sine cosine algorithm (MOSCA) by combining SCA with a multiorthogonal search strategy (MOSS). MOSS helps SCA perform in-depth exploration, mitigating its unbalanced exploitation and tendency to fall into local optima. Abd Elaziz et al. [56] used opposition-based learning (OBL) to optimize SCA, enabling it to analyze the solution space better and increase the accuracy of the optimization process. Zhang et al. [57] proposed an elite opposition-based algorithm named the sine cosine WWO algorithm (SCWWO). The waveform of a water wave is very similar to the sine and cosine curves, and SCA has a robust global search capability, so the combination of SCA and WWO improved the exploitation and exploration capabilities of both algorithms.
Rizk-Allah [58] improved the SCA via orthogonal parallel information (SCA-OPI), in which multiple-orthogonal parallel information is used to maintain diversity and enhance the exploration search, and an experience-based opposition direction strategy preserves the exploration capability. Qu et al. [59] presented a modified SCA based on neighborhood search and greedy Levy mutation. First, to balance the global search and local exploitation capabilities of the algorithm, an exponentially decreasing conversion parameter and a linearly decreasing inertia weight are adopted. Second, to escape local optima more easily, they replaced the optimal individuals with random ones near them. Finally, a greedy Levy mutation strategy was applied to the optimal individuals to improve the local exploitation capability. Tawhid and Savsani [60] proposed an efficient multiobjective SCA (MO-SCA). The algorithm employs elitist nondominated sorting and the crowding distance method to keep the diversity of the optimal solution set. Sindhu et al. [61] proposed an SCA with an elitism strategy and a new updating mechanism for feature selection, finding optimal features to improve classification accuracy. Zamli et al. [62] proposed a new hybrid Q-learning strategy called the Q-learning sine cosine algorithm (QLSCA). The QLSCA dynamically identifies the best practices at run time with Q-learning (based on punishment and reward mechanisms) and adds Levy flight (LF) motion and crossover to help the algorithm jump out of local optima and increase solution diversity. Turgut [63] proposed a hybrid optimization algorithm combining the advantages of the backtracking search algorithm (BSA) and SCA, which achieved the optimal design of a shell-and-tube evaporator. Chegini et al.
[64] combined the PSO algorithm with the SCA position update equation and the LF method to propose a new hybrid algorithm, PSOSCALF. LF is a random walk that generates search steps from the Levy distribution. As the number of jumps increases, more effective searches are performed in the search space, which improves the search capability of the original SCA and avoids trapping in local minima. Issa et al. [65] also combined PSO with SCA, but they improved SCA by introducing tuning parameters, naming the result ASCA-PSO. Another improved SCA, called M-SCA, was proposed by Gupta and Deep [66]; they used relative numbers based on the perturbation rate to generate an opposite population, increasing the population diversity of the original algorithm. Abdel-Fatah et al. [67] proposed a modified version of SCA called MSCA, which depends on the LF distribution and adaptive operators to enhance the search capabilities of the basic SCA. Huang et al. [68] proposed an improved SCA called CLSCA, which combines a chaotic local search (CLS) mechanism and an LF operator. Abdo et al. [69] proposed a newly developed version of GWO called DGWO. The DGWO applies a random mutation to avoid falling into local optima and improves the exploitation process by updating the population along a spiral path. Zhang et al. [44] proposed a new version of FOA, called MCFOA, with a Gaussian mutation operator and the CLS strategy. It avoids premature convergence through the Gaussian mutation operator and then uses CLS to improve local search capability. Gupta et al. [70] proposed another modified version of SCA, also called MSCA, which introduces a nonlinear transition rule and leading guidance based on the elite candidate solution. Besides, this MSCA uses a mutation operator to generate a new position when the newly added mechanism cannot provide a better solution. Xu et al.
[43] proposed an improved MFO called CLSGMFO. Gaussian mutation is used to improve the population diversity of MFO. Then, CLS is applied to the flame-updating process of MFO to make better use of solution locality. Taher et al. [71] proposed a modified grasshopper optimization algorithm (MGOA) to solve the optimal power flow (OPF) problem; MGOA modifies the mutation process of the traditional GOA to avoid falling into local optima. Mohamed et al. [72] used GWO to solve the distribution problem of PV and DSTATCOM and proved its effectiveness by simulation experiments. Anguluri [73] proposed a bee-inspired algorithm based on observing the foraging and mating behavior of honey bees. Zhao et al. [74] combined the dynamic multiswarm particle swarm optimization (DMS-PSO) algorithm with a subregion coordinated search algorithm (SHS) to obtain an improved algorithm called DMS-PSO-SHS. Long et al. [75] proposed an improved SCA for high-dimensional global optimization problems. They used a modified position-updating equation based on inertia weight and a nonlinear conversion parameter strategy based on the Gaussian function to improve the performance of the SCA.

Contribution and Paper Organization.
The main contributions of this paper are as follows: a new adaptive updating formula for the parameter r1 is proposed. SCA involves fewer parameters than other methods, and adjusting these parameters offers an opportunity to improve the performance of the algorithm. Therefore, the r1 update formula is replaced, where r1 is a conversion parameter that directly affects the performance of the algorithm. On this basis, an improved SCA called ASCA is proposed.
The disadvantages of the SCA algorithm are apparent: its local convergence speed is weak in the later stage. To expand the local exploitation ability, a chaos-based local search is added on top of the changing parameter r1.
Then, the superiority of ASCA is demonstrated against competitive rivals. The impact of both mechanisms on SCA was experimentally tested on a comprehensive set of benchmarks, and ASCA achieves better results on engineering problems than its peers. The structure of this paper is as follows. Section 2 gives an overview of SCA. Section 3 describes ASCA in detail. Sections 4.1 to 4.5 show the experimental results. Both mechanisms are tested in Section 4.6. In Section 4.7, the experimental results are statistically analyzed. Section 4.8 presents the application of ASCA to engineering cases. The research results are summarized in Section 5.

Sine Cosine Algorithm (SCA)
SCA starts from a set of randomly generated solutions. Through iterative formula updates, the algorithm approaches the global optimal solution by alternating between a global exploration stage and a local exploitation stage.
First, the algorithm initializes the positions of the solutions, calculates the fitness, and records the position of the optimal solution. Then, each position is updated according to

X_i^(t+1) = X_i^t + r1 · sin(r2) · |r3 · P_i^t − X_i^t|,  if r4 < 0.5,
X_i^(t+1) = X_i^t + r1 · cos(r2) · |r3 · P_i^t − X_i^t|,  otherwise,    (1)

where t represents the current iteration number, X_i^t represents the component in dimension i at iteration t, P_i^t is the ith-dimension component of the optimal individual in the population after iteration t, and r1 is a linearly decreasing function, which can be formulated as

r1 = a − t · (a / T),    (2)

where a is a constant, T is the maximum number of iterations, and r2, r3, and r4 are random numbers in the ranges [0, 2π], [−2, 2], and [0, 1], respectively. The fitness of the updated solution is calculated and compared with the current optimal solution, which is replaced if a better solution is obtained. This is an iterative process. Algorithm 1 presents the pseudocode of SCA, and Figure 1 shows the concrete process.
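As an illustrative, hedged sketch (not the authors' code), one SCA position update following equations (1) and (2) can be written in Python; the function and variable names are my own, and the parameter ranges follow the text above:

```python
import numpy as np

def sca_step(X, P, t, T, a=2.0, rng=None):
    """One SCA position update, equations (1)-(2) as stated above.

    X : (N, dim) population; P : (dim,) best solution found so far.
    The conversion parameter r1 decreases linearly from a to 0.
    """
    if rng is None:
        rng = np.random.default_rng()
    N, dim = X.shape
    r1 = a - t * (a / T)                        # equation (2): linear decrease
    r2 = rng.uniform(0.0, 2.0 * np.pi, (N, dim))
    r3 = rng.uniform(-2.0, 2.0, (N, dim))       # range as stated in the text
    r4 = rng.uniform(0.0, 1.0, (N, dim))
    step = np.abs(r3 * P - X)
    sine_move = X + r1 * np.sin(r2) * step
    cosine_move = X + r1 * np.cos(r2) * step
    return np.where(r4 < 0.5, sine_move, cosine_move)   # equation (1)
```

Note that at t = T the conversion parameter r1 reaches 0, so the population stops moving, which is exactly why the later-stage behavior discussed below matters.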

Proposed ASCA
Compared with other algorithms, the original SCA has the advantages of a simple structure and few parameters: it relies only on the sine and cosine functions to iterate toward the optimal solution. The original SCA has good global search ability, but its parameters are not well suited to the later period of the search, which slows convergence and reduces population diversity. In this paper, two mechanisms are proposed to improve the search ability of SCA in this later period.

Adaptive Parameter.
The original SCA has four main parameters, r1, r2, r3, and r4, mentioned earlier. The values of the sine and cosine terms reflect the stage of the algorithm: when the value of the sine term (r1·sin(r2)) or the cosine term (r1·cos(r2)) is between −1 and 1, the algorithm is in the local exploitation stage; otherwise, it is in the global search stage. Although both r1 and r2 affect this value, r1 has the more significant influence. The original r1 takes the form of equation (2), decreasing linearly with the iterations. However, the search process of SCA is complex rather than linear, so equation (2) does not meet the requirements. To give SCA better computing power, r1 is used to balance the global search and local exploitation of the algorithm. This idea has been mentioned in [75], but the approach in this paper differs. We do not want the adaptive r1 to decrease too quickly in the early stages, because that is not conducive to SCA's global search. According to the operation process of SCA, the algorithm first performs a global search and then enters local exploitation.
This requires the r1 update formula to have a larger value in the early stage, to ensure a better global search capability, and then to enter the local exploitation stage with a smaller value. Therefore, a new r1 updating formula is proposed in equation (3), where T represents the maximum number of iterations and t represents the current number of iterations.

Chaotic Local Search (CLS).
Chaos is one of the most common phenomena in nature: it looks disorganized but has a delicate internal structure. The main characteristics of chaos are randomness, ergodicity, and regularity. Because the update process of SCA is entirely random, chaotic sequences are a natural fit for enhancing its search [76][77][78]. There are many kinds of chaotic systems. In this paper, we choose the common logistic map, shown in the following equation:

y_(k+1) = a · y_k · (1 − y_k),    (4)

where k is the number of iterations and a is the control parameter. When a = 4 and y_1 ∉ {0.25, 0.5, 0.75, 1}, equation (4) is a chaotic system.
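As a small illustration (names are illustrative, not from the paper), the logistic map of equation (4) can be iterated as follows; the excluded seeds are exactly the values that collapse onto the map's fixed points:

```python
def logistic_map(y, a=4.0):
    """One step of the logistic map y_{k+1} = a * y_k * (1 - y_k), equation (4)."""
    return a * y * (1.0 - y)

def chaotic_sequence(y1=0.7, length=5):
    """Generate a chaotic sequence in (0, 1) from a valid seed y1."""
    seq, y = [y1], y1
    for _ in range(length - 1):
        y = logistic_map(y)
        seq.append(y)
    return seq
```

For example, y_1 = 0.75 is excluded because 4 · 0.75 · 0.25 = 0.75, i.e., it is a fixed point of the map rather than a chaotic orbit.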
Local search (LS) can be used to search within a restricted region. Using LS in the neighborhood of the current optimal solution offers the opportunity to obtain a higher-quality solution. However, with insufficient probe depth, LS can also cause the algorithm to fall into local optima. CLS avoids exactly this problem: the randomness of the chaotic system helps the algorithm avoid premature convergence. The chaotic local search is given by equation (5), where V represents the new location formed by CLS, X_best is the current optimal solution, lb and ub represent the lower and upper limits of the space, y_k represents the chaotic sequence formed by equation (4), and λ is obtained from equation (6), where T is the maximum number of iterations and t is the current number of iterations.
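Since the exact expressions of equations (5) and (6) are not reproduced in this text, the following Python sketch shows one common form of a CLS step consistent with the description above: the chaotic value is mapped into [lb, ub], blended with X_best using a shrinking coefficient λ, and accepted greedily. The blending rule, the λ schedule, and all names here are assumptions, not the paper's exact equations.

```python
import numpy as np

def chaotic_local_search(x_best, lb, ub, y, lam):
    """One CLS candidate in the spirit of equation (5) (assumed form).

    y is a logistic-map value in (0, 1); lam shrinks over the iterations
    (a common assumed choice for equation (6) is lam = (T - t + 1) / T).
    """
    chaos_point = lb + y * (ub - lb)          # map y into the search space
    return (1.0 - lam) * x_best + lam * chaos_point

def cls_round(x_best, f, lb, ub, y, lam):
    """Advance the chaotic value, propose V, keep it only if it improves f."""
    y = 4.0 * y * (1.0 - y)                   # logistic map, equation (4)
    v = chaotic_local_search(x_best, lb, ub, y, lam)
    if f(v) < f(x_best):
        x_best = v
    return x_best, y
```

The greedy acceptance means a CLS round can never worsen the incumbent best, which is why adding it is safe for convergence.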

Proposed ASCA.
This section adds the two improvements mentioned above to SCA and details the entire ASCA process. In ASCA, the population is first initialized as in SCA, and then r1 is updated by equation (3). After updating with the sine or cosine branch, a chaotic random search is performed near the current optimal solution and new individuals are generated. Algorithm 2 shows the pseudocode of the ASCA, and Figure 2 shows its flowchart. Therefore, its final complexity is as follows:
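Putting the pieces together, a minimal Python sketch of an ASCA-style loop might look as follows. The nonlinear r1 schedule and the CLS blending rule used here are stand-ins, since equations (3), (5), and (6) are not reproduced in this text; the sketch illustrates the structure described above, not the authors' exact method.

```python
import numpy as np

def asca(f, lb, ub, dim, N=30, T=500, a=2.0, seed=0):
    """Hedged ASCA-style sketch: SCA updates with an adaptive r1 plus a
    chaotic local search around the current best (assumed schedules)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (N, dim))
    fit = np.apply_along_axis(f, 1, X)
    best = X[fit.argmin()].copy()
    y = 0.7                                    # logistic seed, y1 not in {0.25, 0.5, 0.75, 1}
    for t in range(1, T + 1):
        r1 = a * (1.0 - t / T) ** 2            # assumed nonlinear stand-in for equation (3)
        r2 = rng.uniform(0, 2 * np.pi, (N, dim))
        r3 = rng.uniform(0, 2, (N, dim))
        r4 = rng.uniform(0, 1, (N, dim))
        step = np.abs(r3 * best - X)
        X = np.where(r4 < 0.5,
                     X + r1 * np.sin(r2) * step,
                     X + r1 * np.cos(r2) * step)
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < f(best):
            best = X[fit.argmin()].copy()
        y = 4.0 * y * (1.0 - y)                # chaotic sequence, equation (4)
        lam = (T - t + 1) / T                  # shrinking CLS radius (assumed form of (6))
        v = (1 - lam) * best + lam * (lb + y * (ub - lb))
        if f(v) < f(best):                     # greedy CLS acceptance near the best
            best = v
    return best, f(best)
```

The loop body is O(N · dim) per iteration plus N fitness evaluations and one extra CLS evaluation, so the overall cost is dominated by T · (N + 1) function calls.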

Experimental Results and Discussions
In this section, ASCA is compared with traditional meta-heuristic algorithms, improved SCA variants, and improved meta-heuristic algorithms on 31 benchmark functions. All algorithms mentioned above were coded in MATLAB R2014b. For fair experimentation, we tested all the algorithms under uniform conditions. Table 1 lists the parameters involved in the simulation experiments: N is the population size; dim represents the dimension; MaxFEs represents the maximum number of function evaluations; Flod is the number of random runs. The parameters in Table 1 apply to all experiments.

Benchmark Function Validation.
In this section, we describe in detail the 31 benchmark functions used to test the performance of ASCA. The formulas of the functions are presented in Table 2, where Dim is the dimension, Range is the limit of the search space, and F(min) represents the optimal solution. We divide the 31 functions into three categories. F1∼F7 are unimodal functions, which have only one global best solution and no local optima traps. F8∼F13 are multimodal functions, which have a great quantity of local optima. F14∼F31 are composite functions, which are more complex; they were taken from the last eight functions of CEC14 and the last ten functions of CEC17. The purpose of choosing different functions is to test the performance of ASCA more comprehensively and to observe whether ASCA can meet the requirements of various problems.

Scalability Test.
To carry out a comprehensive evaluation of the performance of ASCA, various dimensions of ASCA and SCA were tested on functions F1∼F13, with all other conditions unchanged. In this experiment, four dimensions (100, 200, 1000, and 2000) were selected to focus on how the problem dimension affects algorithm performance. Table 3 shows the experimental results, where Avg and Std represent the mean value and the standard deviation of the results, respectively. The experimental results show that ASCA's performance does decrease as the problem dimension increases compared with the low-dimensional cases. However, ASCA still has an advantage over SCA on these 13 benchmark functions. When the dimension reached 2000, the performance of SCA declined significantly, but ASCA could still achieve good results, showing that ASCA maintains its performance in high dimensions.

Comparison with Conventional Algorithms.
In this section, ASCA is compared with 8 well-known algorithms in this field, namely, the whale optimization algorithm (WOA), gray wolf optimizer (GWO), moth-flame optimization (MFO), bat algorithm (BA) [79], gravitational search algorithm (GSA) [80], sine cosine algorithm (SCA), firefly algorithm (FA) [81], and particle swarm optimization (PSO). Table 4 presents the experimental data of ASCA and the conventional meta-heuristic algorithms on the 31 benchmark functions in detail. In Table 4, Avg represents the mean result of each test algorithm and Std represents the standard deviation. The smaller the Avg value, the higher the quality of the solution; the smaller the Std, the more stable the algorithm. The symbols "+," "−," and "=" in the table indicate that ASCA is superior to, inferior to, or equal to the other algorithm. In the last two columns of the table, Avg represents the average ranking of the test algorithm over the functions, and Rank indicates the rank of the mean value from the Friedman test.
According to the results in Table 4, the average ranking of ASCA is 2.39, while that of SCA is 5.74, indicating that ASCA achieves a significant performance improvement over SCA. Among the nine algorithms tested, ASCA has the best average ranking, which indicates that ASCA has a better comprehensive level on the selected functions than SCA and the other algorithms. Table 5 shows the p values for ASCA versus the other algorithms under the Wilcoxon signed-rank test [82]. A p value of less than 0.05 indicates that the proposed algorithm is statistically significantly improved compared with the other algorithm. As can be seen from the table, compared with FA, GSA, and PSO, the p value exceeds 0.05 only once each, indicating that ASCA improves significantly on most functions. Notably, on the last few composite functions, the p values are all less than 0.05, which indicates that ASCA performs well on these functions. Table 6 lists the running times of ASCA and the other algorithms in seconds. The time complexity of each algorithm is fixed, so the running-time ranking is essentially stable. It can be seen from the table that PSO takes the least time to run, while ASCA is generally ranked second. Obtaining better-quality solutions in less time also indicates that the improvements to SCA are effective. The convergence curves of the algorithms can be seen in Figure 3. For F5 and F6, ASCA can still find better solutions in the later iterations after all other algorithms have stabilized. For F11∼F14 and F24∼F31, the quality of the ASCA solution is the highest, which indicates that ASCA performs better on multimodal and composite functions. From the above experimental analysis, compared with the original SCA, ASCA shows a smaller improvement on unimodal functions but a significant improvement on multimodal and composite functions.
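For readers who want to reproduce this kind of analysis, a paired Wilcoxon signed-rank p value (normal approximation, zero differences dropped, tied ranks averaged) can be computed from two algorithms' per-function results as follows; this is a textbook sketch with illustrative names, not the paper's code:

```python
import math

def wilcoxon_signed_rank_p(a, b):
    """Two-sided Wilcoxon signed-rank p value for paired samples a, b
    (normal approximation; adequate for n around 31 functions)."""
    d = [x - y for x, y in zip(a, b) if x != y]   # drop zero differences
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                  # average ranks over ties
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    # two-sided p from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Feeding in the per-function mean errors of ASCA and a rival gives one entry of a table like Table 5; p < 0.05 is then read as a significant difference between the two algorithms.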

Comparison with Advanced Algorithms.
In this section, to further verify the performance and advantages of the ASCA algorithm, a set of algorithms, including OBSCA [56], ASCA_PSO [65], m_SCA, CBA [83], RCBA [84], and CDLOBA [85], is compared with ASCA on the 31 benchmark functions. All algorithms are tested under a unified framework. Table 7 shows the results; it can be seen that OBSCA and m_SCA are the two strongest competitors of ASCA. Both performed well on unimodal functions, but they ranked lower than ASCA overall because ASCA performed very well on multimodal and composite functions. ASCA is stably in the top three on multimodal functions, which indicates that ASCA escapes local optima more easily than the other improved algorithms.
A p value of less than 0.05 in the Wilcoxon signed-rank test indicates that the proposed algorithm is statistically significantly improved compared with another algorithm, and the smaller the rank value, the better the method. Table 8 records the p values of ASCA versus the other algorithms. From Table 8, all the other algorithms except m_SCA and OBSCA have only one function with a p value higher than 0.05.
This shows that ASCA has better comprehensive performance than the other improved algorithms. Table 9 lists the running times of ASCA and the other SCA-based algorithms in seconds. As mentioned before, the time complexity of an algorithm roughly tracks the order of its running time. It can be seen from the table that OBSCA takes the least time to run, and the running time of ASCA is ranked second. Compared with the other improved algorithms, the time complexity of ASCA is relatively low. The convergence curves of the ASCA algorithm and the other improved algorithms are shown in Figure 4. For F5, F12, and F13, ASCA has better convergence performance and continues to improve in the later stage of the search, while the other algorithms have entered local optima. For F11, the convergence speed of ASCA is higher than that of the other algorithms; although m_SCA converges faster in the late stage, the quality of its solution is lower than that of ASCA.
In general, compared with other improved mechanisms, the mechanism used in this paper balances SCA's exploration and exploitation more effectively. The strong ability to escape local optima enables ASCA to obtain better solutions.
This makes up for SCA's shortcomings.

Comparison with SCA Variants on CEC 2014.
In this section, to verify the performance and advantage of the ASCA, we compared OBSCA, SCADE, CESCA [86], and CLSCA with ASCA on the 30 benchmark functions of CEC 2014. Table 10 records the experimental comparisons between ASCA and the other algorithms on the CEC 2014 functions. As can be seen from Table 10, ASCA ranks first in most cases on the first 22 functions. This suggests that ASCA performs better on unimodal functions, simple multimodal functions, and hybrid functions. The last eight are composite functions, on which CLSCA is the best; however, the difference in the averages between CLSCA and ASCA is not large.
A p value of less than 0.05 in the Wilcoxon signed-rank test indicates that the proposed algorithm is statistically significantly improved compared with the other algorithms. The convergence curves of ASCA and the other modified SCA variants are shown in Figure 5. From Figure 5, we can see that ASCA converges faster than the other improved algorithms on the unimodal functions. On F15, the ASCA curve was relatively stable for some time in the early period and then continued to decline. This shows that ASCA did not fall into the local optimum and successfully found a better solution.

The Impact of CLS and Adaptive r1.
The improved SCA introduces two strategies, namely, the adaptive r1 and CLS. In this section, we examine the impact of these two strategies on SCA. CLSSCA uses only the CLS strategy, ADSCA uses only the adaptive r1, and ASCA uses both strategies. Figure 6 shows the results of the qualitative analysis of 23 benchmark functions by ASCA and the three other algorithms. The graphs in the first column show the three-dimensional location distribution of the ASCA search history; the second column shows the two-dimensional location distribution; the third column shows the trajectory of ASCA; the fourth column shows the average fitness of ASCA; and the fifth column shows the convergence curves of the algorithms. Figures 6(a) and 6(b) record the location and distribution of individuals in each iteration. In Figure 6(b), we can see that most of the search locations are around the optimal solution, with a small number scattered throughout the space. This shows that ASCA can exploit the target area after searching most of the space. In Figure 6(c), the curve fluctuates significantly in the early stage, indicating that ASCA has good search ability and can traverse the space as much as possible. Figure 6(d) shows the change in average fitness: the curve fluctuates greatly but drops rapidly, indicating that ASCA has good convergence ability. Figure 6(e) shows that ASCA can find the optimal solution faster than the other three tested algorithms.
For further exploration, we analyze the balance and diversity of these four algorithms on the 31 functions in this paper. Figure 7 shows the balance analysis of the four algorithms. We have added an incremental-decremental curve to the figure: when the value of the exploration effect is greater than or equal to the exploitation effect, the curve increments; otherwise, it decrements, and negative values are set to zero. Therefore, a high value represents a wide range of exploration activities, while a low value represents a strong exploitation effect. Similarly, the duration of high or low values in the graph reflects the continuing effect of exploration or exploitation in the search strategy. When the exploration and exploitation effects are at the same level, the incremental-decremental curve is maximized. As can be seen from the figure, the search strategy spends most of its time in exploitation to obtain a better solution; the exploration phase is always very short compared with the exploitation phase. We can see that as the functions become more complex, SCA takes longer to explore, which indicates its poor exploitation capability. From the diagram of ADSCA, it can be seen that the adaptive r1 can effectively balance the exploration and exploitation phases of SCA. Equilibrium is the result of the search mechanism used by each meta-heuristic scheme; ASCA and CLSSCA have similar search mechanisms, so their equilibrium responses are similar in the graph. Numerically, the exploitation rate of ASCA is slightly higher than that of CLSSCA. Figure 8 displays the evolution of the diversity of the algorithms during the optimization procedure. The X-axis represents the number of iterations, and the Y-axis represents the diversity measurement. We can clearly see that all algorithms start with vast diversity due to random initialization.
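The diversity formula behind Figure 8 is not reproduced in this text; one common measurement, shown here as a hedged stand-in, is the average Euclidean distance of the individuals to the population centroid:

```python
import math

def population_diversity(X):
    """Average distance of individuals to the population centroid,
    a common diversity measure (the paper's exact formula may differ).

    X : list of individuals, each a list of coordinates."""
    n, dim = len(X), len(X[0])
    centroid = [sum(x[j] for x in X) / n for j in range(dim)]
    return sum(math.dist(x, centroid) for x in X) / n
```

A widely scattered population yields a large value, and a population collapsed onto one point yields zero, matching the qualitative behavior described for Figure 8.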
As the number of iterations increases, the population diversity decreases gradually. The Friedman test requires a certain number of algorithms and functions to be compared: at least 10 benchmark functions and more than 5 algorithms are required to participate in the test. The Friedman statistic requires calculating the mean ranked value: for the best of the k algorithms on the ith function, rank 1 is assigned; for the second best, rank 2; and so on. After that, the average ranking of each algorithm is computed. Under the null hypothesis, which states that all the algorithms are equivalent, their average ranks should be equal. To see whether the null hypothesis is rejected, the Friedman statistic is compared with the critical values obtained for the significance levels (α = 0.05 and 0.1). The formula and explanations can be found in [87]. If the null hypothesis is rejected in the Friedman test, we can continue with the Bonferroni-Dunn test.
In this paper, three groups of comparative experiments were conducted. For all three groups of experimental results, the null hypothesis was rejected, so we can continue with the Bonferroni-Dunn test. In the Bonferroni-Dunn test, the quality of two algorithms is significantly different if the difference between their average rankings is at least as great as the critical difference (CD). The formula for the CD is

CD = q_α · sqrt(k(k + 1) / (6N)),

where q_α is the critical value for multiple nonparametric comparisons, k is the number of algorithms, and N is the number of functions. The first group of experiments compares ASCA with the conventional algorithms. In this experiment, nine algorithms were used, so k equals 9; thirty-one functions were selected, so N equals 31. The significance levels were selected as 0.05 and 0.1, and then the CD was calculated: the CD is 1.89 when α = 0.05 and 1.73 when α = 0.1. Figure 9 records the results of the Bonferroni-Dunn test for the first group of experiments. The y-axis in the figure represents the average ranking of the algorithms. To compare the difference between ASCA and the other algorithms, ASCA is set as the control algorithm.
The horizontal lines in the figure represent the thresholds relative to ASCA. The dotted line represents the threshold when the significance level is 0.05, and the solid line represents the threshold when the significance level is 0.1. When the average ranking of another algorithm is above this threshold, there is a significant difference between that algorithm and ASCA. From Figure 9, we can see that the average ranking of ASCA is the lowest at 2.39. The performance of ASCA was significantly better than MFO, BA, GSA, SCA, FA, and PSO at both significance levels.
We performed the same Bonferroni-Dunn test for the second and third groups of experiments. Figure 10 shows the results for the second group, in which ASCA is compared with the advanced algorithms; 7 algorithms and 31 functions are involved, so k = 7 and N = 31. The calculated CD is 1.45 when α = 0.05 and 1.31 when α = 0.1. We can see from the figure that the performance of ASCA was significantly better than that of OBSCA, ASCA_PSO, CBA, RCBA, and CDLOBA at both significance levels. Figure 11 shows the results for the third group, in which ASCA is compared with the SCA variants on CEC14; 5 algorithms and 30 functions are involved, so k = 5 and N = 30. The calculated CD is 1.01 when α = 0.05 and 0.91 when α = 0.1. We can see from the figure that the performance of ASCA was significantly better than SCADE and CESCA at the significance level of 0.1.
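The CD values quoted above can be reproduced with the formula CD = q_α · sqrt(k(k + 1) / (6N)) and the two-tailed Bonferroni-Dunn critical values q_α tabulated by Demšar (2006):

```python
import math

# Two-tailed critical values q_a for the Bonferroni-Dunn test (Demsar, 2006),
# indexed by the number of algorithms k (the control plus k - 1 comparisons).
Q_05 = {2: 1.960, 3: 2.241, 4: 2.394, 5: 2.498, 6: 2.576,
        7: 2.638, 8: 2.690, 9: 2.724, 10: 2.773}
Q_10 = {2: 1.645, 3: 1.960, 4: 2.128, 5: 2.241, 6: 2.326,
        7: 2.394, 8: 2.450, 9: 2.498, 10: 2.539}

def critical_difference(k, n, alpha=0.05):
    """CD = q_a * sqrt(k(k + 1) / (6N)) for k algorithms on N functions."""
    q = (Q_05 if alpha == 0.05 else Q_10)[k]
    return q * math.sqrt(k * (k + 1) / (6.0 * n))
```

With k = 9 and N = 31 this gives CD ≈ 1.89 at α = 0.05 and ≈ 1.73 at α = 0.1, matching the values reported above.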

Application to Engineering Benchmarks.
In this section, the effectiveness of ASCA was further validated on the following engineering problems: tension-compression spring problem (TCSD), pressure vessel design (PVD), welded beam design (WBD), and rolling element bearing design problem (RED).

TCSD Problem.
The goal of this engineering problem is to minimize the weight of the tension-compression spring. In this problem, the design variables are the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The mathematical model of the problem is as follows:
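The model equations did not survive extraction in this copy. For reference, the widely used textbook TCSD formulation (objective plus four inequality constraints; this is the standard formulation from the literature, not necessarily character-for-character the one used in this paper) can be sketched as:

```python
def tcsd_objective(x):
    """Spring weight: f = (N + 2) * D * d^2, with x = [d, D, N]."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def tcsd_constraints(x):
    """Standard inequality constraints, each of the form g_i(x) <= 0."""
    d, D, N = x
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)               # minimum deflection
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)                      # shear stress
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)                        # surge frequency
    g4 = (d + D) / 1.5 - 1.0                                    # outside diameter
    return [g1, g2, g3, g4]
```

Evaluating a well-known near-optimal design, d ≈ 0.051689, D ≈ 0.356718, N ≈ 11.288966, yields a weight of about 0.012665, with all four constraints satisfied to within rounding of the boundary.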

Complexity 23
ASCA is applied to the TCSD problem, and the convergence of ASCA on this problem is recorded in Figure 12.
To verify ASCA's ability to solve this engineering problem, PSO, GSA, SCA, CGWO [88], and EPO [89] were chosen for comparison with ASCA. ASCA, PSO, GSA, and SCA used 2000 iterations in this experiment. The CGWO and EPO data in the table are taken from the original papers; CGWO and EPO used 500 and 1000 iterations, respectively. Table 12 records the optimal solutions of the tension-compression spring design problem. Table 13 records the comparison results of the involved algorithms. From the table, we can see that ASCA has made great progress on this engineering problem compared with the classic SCA. The standard deviation of ASCA ranks second in the table, indicating that ASCA is also relatively stable on this problem.

PVD Problem.
This problem aims to minimize the design cost of a cylindrical pressure vessel. One end of the vessel is capped, while the other end is hemispherical. Its cost is related to four variables: shell thickness (T_s), head thickness (T_h), inner radius (R), and the length of the cylindrical section without the head (L). The mathematical model is presented as follows:
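The model equations are again missing from this copy. The standard PVD formulation from the literature (shown here as a hedged sketch, with cost coefficients and constraint bounds as commonly published, which may differ slightly from this paper's exact statement) is:

```python
import math

def pvd_cost(x):
    """Manufacturing cost of the vessel; x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def pvd_constraints(x):
    """Standard inequality constraints, each of the form g_i(x) <= 0."""
    Ts, Th, R, L = x
    g1 = -Ts + 0.0193 * R                                     # shell thickness
    g2 = -Th + 0.00954 * R                                    # head thickness
    g3 = (-math.pi * R ** 2 * L
          - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0)       # minimum volume
    g4 = L - 240.0                                            # length limit
    return [g1, g2, g3, g4]
```

A frequently cited near-optimal design, T_s = 0.8125, T_h = 0.4375, R ≈ 42.0984, L ≈ 176.6366, gives a cost of about 6059.71; the thickness and volume constraints are active there, so they sit essentially on the boundary.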

Table 14 records the optimal solutions of various methods to the pressure vessel design problem. Table 15 records the comparison results of the involved algorithms. As seen from the table, the performance of ASCA on this problem is not the best, but it is far better than that of some conventional algorithms.

WBD Problem.
In this problem, a welded beam is designed for minimum cost subject to constraints. The purpose is to find the lowest-cost welded beam under four constraints: shear stress (τ), bending stress (θ), buckling load (P_c), and deflection (δ).
This problem involves the following four variables: weld seam thickness (h), weld joint length (l), beam width (t), and beam thickness (b). The mathematical model is as follows:
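The full WBD model, with its shear-stress and buckling expressions, is lengthy and did not survive extraction here; its cost function, however, is compact. As a sketch of the standard published formulation (constraints omitted for brevity, and not necessarily identical to this paper's statement):

```python
def wbd_cost(x):
    """Fabrication cost of the welded beam; x = [h, l, t, b].
    First term: cost of the weld material; second term: cost of the bar,
    which extends 14 in beyond the weld."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```

Evaluating a well-known near-optimal design, h ≈ 0.205730, l ≈ 3.470489, t ≈ 9.036624, b ≈ 0.205730, gives a cost of about 1.7249, consistent with the best values commonly reported for this benchmark.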
Figure 14 shows the convergence curve of ASCA on this engineering problem.
To solve this problem, this paper selects PSO, RCBA, EPO, WCA [90], and MBA [91] for comparison with ASCA. ASCA, PSO, RCBA, and EPO used 2000 iterations in this experiment. The WCA and MBA data in the table are taken from their respective papers; WCA used 30000 iterations, and MBA used 2000 iterations. Table 16 records the optimal solutions of ASCA and the other peers to the WBD problem. Table 17 records the comparison results of the involved algorithms. We can see that ASCA achieved the smallest value among the involved algorithms.

RED Problem.
The objective of this problem is to maximize the dynamic load-carrying capacity of a rolling element bearing. There are 10 decision variables: pitch diameter (D_m), ball diameter (D_b), number of balls (Z), inner (f_i) and outer (f_o) raceway curvature coefficients, K_Dmin, K_Dmax, ε, e, and ζ. The mathematical representation of this problem is given as follows:

Figure 13: Convergence curves of ASCA on the PVD problem. Figure 14: Convergence curves of ASCA on the WBD problem.
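The full model is missing from this copy, and its geometry factor is too long to reconstruct reliably. The headline objective, however, is the standard basic dynamic capacity from the bearing literature, sketched here with the geometry factor f_c treated as a given input (its full closed-form expression is deliberately omitted):

```python
def dynamic_capacity(fc, Z, Db):
    """Basic dynamic load capacity C_d = f_c * Z^(2/3) * D_b^1.8,
    the form valid for ball diameter D_b <= 25.4 mm. The factor f_c
    encapsulates the raceway-geometry term of the full model."""
    return fc * Z ** (2.0 / 3.0) * Db ** 1.8
```

The optimizer maximizes C_d over the 10 decision variables listed above, subject to the problem's geometric and assembly constraints.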
Figure 15 shows the convergence curve of ASCA on this engineering problem.
To solve this problem, SCA, m_SCA, MFO, SCADE, EPO, and MBA are chosen for comparison with ASCA. ASCA, SCA, m_SCA, MFO, and SCADE used 3000 iterations in this experiment. The EPO and MBA data are taken from their respective papers; EPO used 1000 iterations, and MBA used 15100 iterations. Table 18 records the optimal solutions of ASCA and the other peers to this problem. Table 19 records the comparison results of the involved algorithms. ASCA does not show particularly outstanding performance on this engineering problem. However, it achieves a significant improvement over both the original SCA and the improved SCA variant, obtaining better solutions with more stability.

Conclusions and Future Works
This paper introduced an adaptive transformation strategy and a chaotic local search strategy to improve the performance of the original SCA. The adaptive transformation strategy was designed to balance the global exploration and local exploitation of SCA. The chaotic local search strategy was introduced to further explore the promising area around the best solution found by SCA. The proposed ASCA has been compared with various well-known and advanced meta-heuristics on a comprehensive set of benchmark problems. The experimental results on the benchmark functions show that the proposed ASCA is superior to the involved competitive peers in convergence speed and solution accuracy. Additionally, in the engineering cases, ASCA improved significantly over SCA and some other SCA variants. Of course, there is still a gap compared with some other excellent algorithms, and we will continue to study how to improve the proposed method in the future.
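The chaotic local search summarized above can be illustrated with a short sketch. The exact chaotic map and update rule used in ASCA are defined in the method section and are not reproduced in this excerpt; a common choice, shown here purely for illustration, is the logistic map combined with greedy acceptance around the current best solution:

```python
def logistic_chaos(x):
    # Logistic map x_{k+1} = 4 x_k (1 - x_k), chaotic on (0, 1).
    return 4.0 * x * (1.0 - x)

def chaotic_local_search(best, f, lb, ub, iters=50, seed=0.7):
    """Perturb the current best solution with a chaotic sequence,
    shrinking the radius over time, and keep only improvements."""
    x_chaos = seed
    best = list(best)
    best_val = f(best)
    for t in range(iters):
        x_chaos = logistic_chaos(x_chaos)
        radius = 1.0 - t / iters  # shrink the search radius over iterations
        # Same chaotic value applied to every dimension, clamped to bounds.
        cand = [
            min(ub[i], max(lb[i],
                best[i] + radius * (2.0 * x_chaos - 1.0) * (ub[i] - lb[i]) * 0.1))
            for i in range(len(best))
        ]
        val = f(cand)
        if val < best_val:  # greedy acceptance
            best, best_val = cand, val
    return best, best_val
```

On a simple sphere function, starting from [0.5, 0.5] with bounds [-1, 1], this routine steadily refines the incumbent, which is the role the chaotic local search plays at the exploitation stage of ASCA.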
In future work, the effectiveness of tuning other parameters of SCA can be investigated. The proposed ASCA can also be applied in many other scenarios, such as image segmentation, clustering, parameter tuning for deep learning models [92][93][94], social manufacturing optimization [95], and video coding optimization [96].

Data Availability
The data involved in this study are all public data, which can be downloaded through public channels.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.

Authors' Contributions
Guoxi Liang and Huiling Chen contributed equally to this work.