Spiral Motion Enhanced Elite Whale Optimizer for Global Tasks

The whale optimization algorithm (WOA) is a high-performance metaheuristic algorithm that can effectively solve many practical problems and has broad application prospects. However, the original algorithm has significant room for improvement in solving speed and precision, and it easily falls into local optima when facing complex or high-dimensional problems. To address these shortcomings, an elite strategy and the spiral motion from moth flame optimization are used to enhance the original algorithm's efficiency; the resulting method is called MEWOA. By using these two mechanisms to build a superior population, MEWOA further balances the exploration and exploitation phases and makes it easier for the algorithm to escape local optima. To demonstrate the proposed method's performance, MEWOA is compared with other strong algorithms on a series of comprehensive benchmark functions and applied to practical engineering problems. The experimental data reveal that MEWOA surpasses the comparison algorithms in convergence speed and solution quality. Hence, it can be concluded that MEWOA has great potential for global optimization.

Swarm methods have emerged in many fields as they can deal with a wide variety of problems in a flexible way. Some well-known techniques are the slime mould algorithm [64], Harris hawks optimization (HHO) [65], the naked mole-rat algorithm (NMR) [66], hunger games search (HGS) [67], the moth search algorithm (MSA) [68], monarch butterfly optimization (MBO) [69], the krill herd algorithm (KH) [70], the teaching-learning-based optimizer (TLBO) [71], differential evolution (DE) [72], and differential search (DS) [73]. The exploratory and exploitative abilities of swarm-based methods provide a good evolutionary basis for areas such as computer vision [74], deployment optimization [75], enhancement of transportation networks [76], optimization of deep learning tasks [77-79], improvement of prediction methods [80, 81], and decision-making techniques [82-84]. One such method is the whale optimization algorithm (WOA), an intelligent optimization algorithm presented by Mirjalili and Lewis [85] in 2016. The algorithm models the whale's hunting behavior and the prey's evasive behavior to seek the optimal solution to a problem. Although WOA has the advantages of few parameters and strong global convergence, the standard WOA still suffers from slow convergence speed and low convergence accuracy.
Thus, many improved versions can be found in the literature. Wang et al. [86] devised MOWOA, an opposition-based multiobjective WOA using global grid ranking, combining multiple components to improve optimization performance. To address the poor exploration and local optimal stagnation of WOA, Salgotra et al. [87] proposed an improved WOA based on mechanisms such as opposition-based learning, exponentially decreasing parameters, and elimination or reinitialization of the worst particles; the improved algorithm was experimentally demonstrated to have a large improvement in performance. Sun et al. [88] enhanced WOA with a quadratic interpolation strategy (QIWOA). The algorithm introduced new parameters to effectively search the solution space and handle premature convergence, and adopted quadratic interpolation around the best individual to enhance its exploitation capability and solution precision. Agrawal et al. [89] combined quantum concepts with WOA, adopting the quantum bit representation of population agents and the quantum rotation gate operator as a mutation operator to improve the exploration and exploitation ability of the classical WOA. Hussein et al. [90] handled binary optimization problems by using the basic version of WOA and devising two transfer functions (S-shaped and V-shaped) to map the continuous search space into a binary one. Luo et al. [91] integrated three strategies with the original approach to obtain a better balance between exploration and exploitation trends. Firstly, a chaos initialization phase is utilized to start a group of chaos-triggered whales at the initial phase. Then, the diversity level of the evolutionary population is enhanced by Gaussian mutation. Finally, a chaotic local search, combined with a "narrowing" strategy, is utilized to raise the original optimizer's exploitation tendency. The effect of this scheme is due to the base concepts of chaos theory [59, 92-94]. Sun et al. 
[95] proposed a nonlinear dynamic control parameter updating strategy based on a cosine function and integrated this strategy and a Lévy flight strategy into WOA (MWOA). Hemasian-Etefagh and Safi-Esfahani [96] introduced a new whale-grouping idea (called GWOA) to overcome the early convergence problem. Elaziz and Mirjalili [97] integrated chaotic mapping and opposition-based learning into the algorithm and used the differential evolution (DE) algorithm to automatically select the chaotic map and part of the population to alleviate the defects (DEWCO). Guo [98] devised an enhanced WOA using social learning and wavelet mutation. A new linearly increasing probability is designed to increase the global search capability. According to the principle of social learning, an individual's social network is constructed using social hierarchy and social influence. To enable the exchange and sharing of information among groups, they established an adaptive neighborhood learning strategy on the basis of the network relationships.
The Morlet wavelet mutation mechanism is adopted to dynamically adjust the mutation space, thus enhancing the algorithm's ability to escape local optima.
WOA and its improved versions are often used to solve practical application problems. Revathi et al. [99] devised an optimization scheme with the brainstorm WOA (BS-WOA) to identify the key used to improve the data structure's privacy and utility by amending the database. Gong et al. [100] used an improved edition of WOA to determine the optimal features and amend the classification weights of an artificial neural network. The model is simulated on FLAIR, T1, and T2 data sets, showing that the presented model has a robust diagnostic capability. The model was then used to diagnose common diseases such as breast cancer, diabetes, and erythemato-squamous disease. Zhang et al. [101] utilized the best convolutional neural network (CNN) to process skin disease images and adopted the improved WOA to optimize the CNN. Xiong et al. [102] devised an enhanced WOA, called IWOA, to accurately optimize the parameters of different PV models, a typical complex nonlinear multivariable strongly coupled optimization problem. Petrović et al. [103] analyzed the scheduling problem of a single mobile robot, and the best transportation method for raw materials, goods, and parts in an intelligent manufacturing system was found through WOA. Li et al. [104] used WOA to modify the input weights and hidden layer biases of an extreme learning machine (ELM) and used this model to assess the aging of insulated gate bipolar transistor modules. Akyol and Alatas [105] adopted WOA for sentiment analysis, which is a multiobjective problem. Qiao [106] introduced an adaptive search and encircling mechanism, spiral positioning, and jump behavior to enhance the efficiency of WOA and used the improved algorithm to predict short-term gas consumption. Lévy flight and pattern search were embedded into WOA for parameter estimation of solar cells and photovoltaic systems [107].

Complexity
Although WOA has significantly improved performance and robustness compared with other metaheuristic algorithms, it is still not free from the dilemma of easily falling into local optimal solutions, and the same phenomena of low solution accuracy and slow convergence arise when solving function problems. So, this paper proposes an improved variant of WOA, named MEWOA. We introduce two strategies, an elite strategy and the spiral motion from moth flame optimization (MFO) [12, 108, 109], which significantly strengthen the convergence accuracy and speed of the basic WOA and make it easier to jump out of local optima. To further verify the performance of MEWOA, the algorithm is also utilized to solve practical engineering problems. The results reveal that MEWOA is superior to the other algorithms in solution quality and convergence speed.
The main contributions of this study can be summarized as follows: (i) Aiming at overcoming the problems of WOA, we introduce an elite strategy as well as spiral motion into WOA to improve population diversity while enhancing optimal solution selection, and finally propose an improved WOA (MEWOA). (ii) MEWOA is compared with some metaheuristic algorithms and advanced algorithms on function test sets such as CEC2017 and CEC2014, respectively, and satisfactory results are obtained. (iii) The proposed MEWOA achieves excellent results in three typical engineering problems. This paper is structured as follows. Section 2 briefly introduces WOA, the elite strategy, and MFO. Section 3 describes MEWOA. In Section 4, a range of experiments is conducted to demonstrate the proposed algorithm's performance. In Section 5, the full content is summarized, and future research directions are pointed out.

Whale Optimization Algorithm (WOA).
WOA is a metaheuristic algorithm devised by Mirjalili and Lewis [85] based on the bubble-net behavior of humpback whales during hunting. In this algorithm, each humpback's position represents a feasible solution. Humpback whales hunt by producing distinctive bubbles along a circular or "9"-shaped path. Based on this phenomenon, the authors' mathematical model includes the following three steps: random search, encircling prey, and attacking prey.

Random Search.
Each agent's position is updated with respect to a randomly selected whale to find prey. The specific process is as follows:

D = |C · X_rand^d(t) − X^d(t)|,
X^d(t + 1) = X_rand^d(t) − A · D,

where X_rand^d is the position of the d-th dimension of the randomly selected whale, X^d denotes the position of the current individual in the d-th dimension, t is the current iteration number, D denotes the distance between the random individual and the current individual, and A and C are coefficient vectors computed by the following formulas.
A = 2a · r1 − a,
C = 2 · r2,

where a linearly decreases from 2 to 0 over the course of the iterations, and r1 and r2 are random vectors in [0, 1].

Encircling Prey.
Once the best search agent so far is identified, the other whales update their positions toward it:

D = |C · X_best^d(t) − X^d(t)|,
X^d(t + 1) = X_best^d(t) − A · D,

where X_best^d(t) is the position of the d-th dimension of the best individual found so far, X^d denotes the position of the current individual in the d-th dimension, and D denotes the distance between the best individual and the current individual.
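The random-search update above can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the population layout as an (n_whales, dim) array and the function name are assumptions.

```python
import numpy as np

def random_search_step(X, t, max_iter, rng):
    """One WOA random-search step: each whale moves relative to a random whale."""
    n, dim = X.shape
    a = 2.0 - 2.0 * t / max_iter          # a decreases linearly from 2 to 0
    X_new = X.copy()
    for i in range(n):
        r1, r2 = rng.random(dim), rng.random(dim)
        A = 2.0 * a * r1 - a              # A = 2a*r1 - a
        C = 2.0 * r2                      # C = 2*r2
        j = rng.integers(n)               # randomly selected reference whale
        D = np.abs(C * X[j] - X[i])       # distance to the random whale
        X_new[i] = X[j] - A * D
    return X_new
```

Note that when t reaches max_iter, a (and hence A) becomes zero, so every whale collapses onto its randomly chosen reference whale, which is consistent with the shrinking behavior described above.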

Attacking Prey.
Based on the hunting behavior of humpback whales, which swim toward prey in a spiral motion, the mathematical model of hunting behavior is devised as follows:

X^d(t + 1) = D_p · e^(bl) · cos(2πl) + X_best^d(t),

where D_p = |X_best^d(t) − X^d(t)| denotes the distance between the whale and its prey, b is a constant utilized to define the shape of the spiral, and l is a random number in [−1, 1].
As the whale approaches its prey along the spiral, it also shrinks its encircling circle. Therefore, a probability P_i is adopted to realize this synchronous behavior; Mirjalili sets P_i to 0.5 to switch the whale's position update between the shrinking encircling mechanism and the spiral model. The concrete model is as follows:

X^d(t + 1) = X_best^d(t) − A · D,                      if p < P_i,
X^d(t + 1) = D_p · e^(bl) · cos(2πl) + X_best^d(t),    if p ≥ P_i,

where p is a random number in [0, 1]. When |A| < 1 and p < P_i, the current reference position X_*^d(t) is taken to be X_best^d(t), and the whale updates its position with the encircling formula around its prey.
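The complete position-update rule, combining the spiral branch, the encircling branch, and the random-search branch, can be sketched as follows. This is a hedged illustration: in the usual WOA description |A| is a scalar condition, whereas this sketch applies it element-wise, and the function name is an assumption.

```python
import numpy as np

def woa_update(X, X_best, t, max_iter, b=1.0, rng=None):
    """One full WOA iteration: p chooses spiral vs encircling; |A| chooses
    encircling (exploitation) vs random search (exploration)."""
    if rng is None:
        rng = np.random.default_rng()
    n, dim = X.shape
    a = 2.0 - 2.0 * t / max_iter            # a decreases linearly from 2 to 0
    X_new = X.copy()
    for i in range(n):
        p = rng.random()
        if p < 0.5:                          # shrinking-encircling branch (P_i = 0.5)
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            if np.all(np.abs(A) < 1):        # exploit: move toward the best whale
                D = np.abs(C * X_best - X[i])
                X_new[i] = X_best - A * D
            else:                            # explore: follow a random whale
                j = rng.integers(n)
                D = np.abs(C * X[j] - X[i])
                X_new[i] = X[j] - A * D
        else:                                # spiral (bubble-net) branch
            l = rng.uniform(-1.0, 1.0)
            D_p = np.abs(X_best - X[i])
            X_new[i] = D_p * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
    return X_new
```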

Otherwise, the agent updates its position using a randomly selected reference whale.

Elite Strategy.
According to the positions of the original population X, we generate a new population X1 sorted by its fitness values Fitness1. Then, X and X1 are combined to form the population X2, which is sorted by fitness Fitness2, and the top N individuals are selected. The pseudocode of the elite strategy is shown in Algorithm 1.
We know that a population obtained by random initialization can cover the global search space, but such a search is not targeted. If some spatial regions have already been shown to be unpromising during the first initialization, a fresh random search may still revisit these useless regions, which wastes resources. Adding the elite strategy solves this problem: while still satisfying global search, subsequent searches avoid re-exploring the invalid solution space and instead concentrate on the space where the optimal solution may exist, which greatly improves the efficiency of the algorithm. Through the elite strategy, a new population is generated by ranking the original population according to fitness values, after which the two populations are combined and the top N individuals are selected from the union. This selects the best individuals at each step and ultimately improves the overall population quality.
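The merge-sort-truncate step described above can be sketched as follows; this is a minimal illustration assuming a minimization problem, with populations stored as (N, dim) arrays.

```python
import numpy as np

def elite_select(X, X1, fitness_fn):
    """Merge population X with new population X1, sort the union by fitness
    (ascending, i.e. minimization), and keep the best N individuals."""
    N = X.shape[0]
    union = np.vstack([X, X1])                    # combined population X2
    fit = np.apply_along_axis(fitness_fn, 1, union)
    order = np.argsort(fit)                       # best (smallest) first
    return union[order[:N]], fit[order[:N]]       # top-N individuals + fitness
```

For example, with the sphere function as fitness, merging {(3,3), (2,2)} with {(1,1), (4,4)} keeps (1,1) and (2,2).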

Moth Flame Optimization (MFO).
MFO is a swarm intelligence optimization algorithm [12, 108, 109]. It is inspired by the unique flight mode of moths, called transverse orientation, used for navigation at night. In this algorithm, the set of moths M can be represented as a matrix in which M_ij is the j-th position (dimension) of the i-th moth. Likewise, the flame set F is a matrix in which F_ij is the j-th position of the i-th flame. Each agent updates its position according to the expression

M_i = S(M_i, F_j),

where M_i is the i-th moth, F_j is the j-th flame, and S is the helix function:
S(M_i, F_j) = D_i · e^(bt) · cos(2πt) + F_j,

where D_i = |F_j − M_i| denotes the linear distance between the i-th moth and the j-th flame, b is the defined helix shape constant, and t is a random number in the interval [−1, 1].
To help the moths escape local optima, the number of flames decreases during the iterations:

flame_no = round(N − l · (N − 1)/T),

where l denotes the current iteration number, N is the maximum number of flames, and T is the maximum number of iterations. The process of MFO is summarized as follows: (1) Initialize the population and calculate the fitness values of the population. (2) Sort the fitness values; calculate the locations of the flames and their fitness values. (3) Calculate the number of flames according to equation (12). (4) Calculate the linear distance between each moth and its corresponding flame and substitute it into equation (11) to obtain the updated position. (5) Calculate the fitness values of the updated moth population. (6) Check whether the termination condition is met; otherwise, jump to Step 2. The strategies in MFO give good access to the best individuals in the population, i.e., the corresponding flame positions. Because the flame positions are obtained from the moth population after its fitness values are calculated and ranked, and, as iterations proceed, flame positions are retained only for the better individuals of the moth population, applying MFO to WOA can effectively enhance the local search capability of the algorithm.
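One MFO iteration, combining the flame-count reduction of equation (12) with the spiral update of equation (11), can be sketched as follows. This is an illustrative reconstruction; the convention that surplus moths share the last flame follows the common MFO description and is an assumption here.

```python
import numpy as np

def mfo_step(M, F, l, T, b=1.0, rng=None):
    """One MFO step: shrink the flame count, then spiral each moth to a flame."""
    if rng is None:
        rng = np.random.default_rng()
    N = F.shape[0]
    # number of flames shrinks from N down to 1 over the run (equation (12))
    flame_no = int(round(N - l * (N - 1) / T))
    M_new = np.empty_like(M)
    for i in range(M.shape[0]):
        j = min(i, flame_no - 1)             # surplus moths share the last flame
        t = rng.uniform(-1.0, 1.0)
        D = np.abs(F[j] - M[i])              # linear distance to the flame
        M_new[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + F[j]
    return M_new, flame_no
```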

Proposed Method
In this section, MEWOA is illustrated in detail. The flowchart of the proposed MEWOA is presented in Figure 1. MEWOA incorporates the elite strategy and the MFO mechanism to balance exploration and exploitation. The algorithm first uses the elite strategy to generate a high-quality candidate population. Based on this population, the MFO mechanism is used to form a better population, which helps the algorithm converge quickly, find the optimal solution, and effectively avoid premature stagnation.

Experimental Studies
In this section, we further verify the performance of MEWOA. First, the combination of strategies and the stability of the algorithm are analyzed. Next, on the CEC 2017 competition data set, several advanced versions of WOA are adopted for comparison. Finally, the algorithm is applied to three practical engineering problems.
The related experiments are conducted under the Windows Server 2012 R2 operating system using MATLAB R2014a; the hardware platform is configured with an Intel(R) Xeon(R) Silver 4110 CPU (2.10 GHz) and 16 GB of RAM.

Benchmark Functions and Performance Evaluation Measures.
This experiment adopts the IEEE CEC 2017 competition data set as the test functions, which can effectively assess an algorithm's ability. To ensure the experiment's fairness, the involved algorithms are evaluated under the same conditions: the overall scale and the maximum number of iterations are set to 300,000 and 150,000, respectively. This ensures there is no bias or unfair setting that skews the tests toward a specific method, in line with artificial intelligence works [110-112]. The related algorithms are evaluated 30 times on each benchmark function independently. Friedman's test [113] is a nonparametric statistical comparison test that can evaluate the experimental results. It is usually utilized to find differences between multiple test results and rank all algorithms' average performance to make a statistical comparison, yielding the ARV (average ranking value). For statistical testing, the paired Wilcoxon signed-rank test [114] is also adopted in this experiment. The Wilcoxon signed-rank test compares the performance of two algorithms: when the p value is less than 0.05, the performance of MEWOA is statistically significantly improved compared with the other algorithm.
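Both statistical procedures are available in SciPy, and the ranking-plus-testing pipeline described above can be sketched as follows. The numbers are made-up placeholders (one error value per benchmark function, each already averaged over 30 runs), not results from the paper.

```python
import numpy as np
from scipy import stats

# placeholder per-function error values for three algorithms (illustrative only)
results = {
    "MEWOA": np.array([1.2, 0.8, 2.1, 0.5, 1.0]),
    "WOA":   np.array([2.5, 1.9, 3.0, 1.1, 2.2]),
    "DE":    np.array([1.5, 1.1, 2.4, 0.9, 1.4]),
}

# Friedman test across all algorithms (nonparametric, paired by function)
chi2, p_friedman = stats.friedmanchisquare(*results.values())

# ARV: rank the algorithms on each function (1 = best), then average the ranks
ranks = stats.rankdata(np.column_stack(list(results.values())), axis=1)
arv = dict(zip(results, ranks.mean(axis=0)))

# paired Wilcoxon signed-rank test between two algorithms
stat, p_wilcoxon = stats.wilcoxon(results["MEWOA"], results["WOA"])
# p_wilcoxon < 0.05 would indicate a statistically significant difference
```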

Impacts of Components.
MEWOA is a novel swarm intelligence algorithm that introduces two mechanisms, the MFO [108] spiral motion and elite opposition-based learning (EOBL) [115], into the basic WOA. To better understand the influence of each mechanism on the performance of WOA, we compare MWOA (WOA with the MFO mechanism), EWOA (WOA with the EOBL mechanism), and MEWOA (WOA with both mechanisms integrated simultaneously). In Table 1, "M" represents the MFO mechanism and "E" the EOBL mechanism; "1" indicates that the mechanism is used in the WOA algorithm, and "0" indicates that it is not. Table 2 reveals the test data of the four algorithms on the CEC2017 [116] functions. This experiment is carried out under the same conditions: the dimension is set to 30, the number of particles to 30, and the maximum number of evaluations to 150,000. To obtain average results, each algorithm is run 30 times independently.
We test the impact of the different mechanisms on 30 benchmark functions in CEC2017. Table 2 shows the comparison results of the various models: the average results and standard deviations of each algorithm over 30 runs on each test function, with the optimal values shown in bold. On the 30 test functions, the improved MEWOA achieves the optimal solutions on most functions and has significant advantages over MWOA and EWOA. The experimental results reveal that adding the MFO mechanism and the EOBL mechanism to WOA can effectively enhance the performance of the original WOA and its ability to search for optimal solutions.
To further study the improved MEWOA algorithm's performance, we performed the following analytical experiments on the CEC2017 functions. Figure 2 demonstrates the results of the feasibility analysis of MEWOA, where the original WOA is chosen for comparison. The first column (a) shows the three-dimensional location distribution of the MEWOA search history. The second column (b) shows the two-dimensional location distribution of the MEWOA search history. The third column (c) shows the trajectory of MEWOA during the iterative process. The fourth column (d) shows the average fitness variation over the iterations. The fifth column (e) demonstrates the convergence curves of the algorithms. The black dots in Figure 2 and the trajectories in Figure 2(c) show that the individuals fluctuate significantly in the early and middle stages and gradually stabilize in the later stages. Both observations show that the algorithm searches the whole solution space as much as possible and then identifies the region where the optimal solution is located for further exploitation. Figure 2(d) shows that the algorithm's average fitness curve declines steadily throughout the iterations; on F1, F4, F7, and F26, it falls to low fitness values early in the iterations, showing that the algorithm exhibits good convergence on these functions. In Figure 2(e), the convergence curves of the two algorithms make it even more evident that MEWOA can find solutions of better quality. This paper also analyzes the balance and diversity of these two algorithms on the CEC 2017 functions. Figure 3 demonstrates the results of the balance analysis of MEWOA and WOA. The red and blue curves in the figure represent the exploration effect and the exploitation effect, respectively; the higher the value of a curve, the more dominant the corresponding effect. A third curve is added to visualize the relationship between the two effects more clearly.
When the value of the exploration effect is higher than or equal to the exploitation effect, this third curve increases; otherwise, it decreases, and when it falls to a negative value, it is set to zero. A typical algorithm always performs a global search first and then exploits the target area locally after it has been identified. Therefore, in the balance-analysis curves, the exploration curve always starts with a higher value, and MEWOA is no exception. From Figure 3, we can see that the exploration and exploitation curves of both algorithms fluctuate considerably, and the exploitation effect occupies most of the time in both. On the selected functions, the exploration phase of MEWOA ends significantly earlier than that of WOA, and the exploitation curve keeps increasing from then on, indicating that MEWOA spends more time exploiting the target area. Figure 4 reveals the change in population diversity during the optimization process. From the figure, we can clearly see that the algorithm shows high population diversity at the beginning due to its random initialization. As the iterations progress, the algorithm keeps narrowing the search and the population diversity decreases. The diversity curves of MEWOA and WOA are relatively similar. We know that both elite selection and MFO make the algorithm converge faster in the early stage, so population diversity declines rapidly. However, the encircling mechanism, random search mechanism, and unique update method of WOA alternate between global and local moves during exploration, which prevents MEWOA from converging too quickly in the early stage.
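One common way to quantify the population diversity plotted in such figures is the mean Euclidean distance of individuals from the population centroid. The text does not spell out its exact diversity formula, so the measure below is an assumption used for illustration.

```python
import numpy as np

def diversity(X):
    """Mean Euclidean distance of each individual from the population centroid."""
    centroid = X.mean(axis=0)
    return float(np.linalg.norm(X - centroid, axis=1).mean())
```

A fully converged population (all individuals identical) has diversity 0, while a spread-out population has a large value; tracking this quantity per iteration produces curves like those in Figure 4.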

Scalability Test.
To test the MEWOA algorithm's ability to search for the optimal solution in different dimensions, we conducted tests in 50 and 100 dimensions and compared MEWOA with six other algorithms. In the experiments, the number of particles is set to 30, the maximum number of evaluations is set to 150,000, each algorithm is independently run 30 times to take the average, and the CEC2017 test functions are used. The related results are shown in Table 3, where AVG denotes the average of the results and STD the standard deviation. The data show that, compared with the other algorithms, MEWOA has clear advantages in processing unimodal functions in 50 and 100 dimensions. The improved MEWOA outperforms the other six improved WOA algorithms and has a powerful ability to search for optimal solutions.

Comparison with Well-Established Methods.
To investigate the improved MEWOA algorithm's performance and advantages, a comparative test is made with several improved WOA variants. These algorithms are very successful WOA improvements with excellent search performance. In the test, the dimension of the particles is set to 30, the number of particles to 30, and the maximum number of evaluations to 150,000; each algorithm is independently run 30 times to take the average, and the CEC2017 test functions are used. Table 4 lists the comparison results using the average and standard deviation of each algorithm over 30 runs on each test function. The table reveals that the averages and standard deviations obtained by the improved MEWOA are smaller than those of the other comparison algorithms.
We use the Friedman test [113], a nonparametric statistical comparison test, to rank the algorithms' performance and to find differences between the results of multiple tests. The Friedman test ranks the average scores of the involved algorithms and then conducts further statistical comparisons to obtain the ARV (average ranking value). It can be seen from Table 4 that the enhanced algorithm possesses better performance than the other comparison algorithms on all test functions except F22, F27, and F28. The Wilcoxon signed-rank test [114] is also utilized in this paper to test whether MEWOA is superior to each comparison algorithm: when the p value is less than 0.05, MEWOA is significantly better than the comparison algorithm on the current test function. As shown in Table 4, the p value of MEWOA is less than 0.05 on most test functions, so the improved algorithm in this paper is better than the other compared algorithms on most test functions.
Convergence speed and convergence accuracy are important indicators of the performance of evolutionary algorithms. We selected six representative test functions, namely F1, F10, F12, F18, F26, and F30, to illustrate the algorithm's effectiveness and search trends more clearly; the results are shown in Figure 5. It can be seen that on F1, F12, F18, and F30, the improved algorithm has still not stagnated after 150,000 evaluations, and its convergence trend is much stronger than that of the other comparison algorithms. In all cases, the convergence accuracy of MEWOA is better than that of its peers.

Comparison with Representative Metaheuristic Algorithms.
To better verify the performance of MEWOA, in this section we select some representative metaheuristics to compare with MEWOA. Among the algorithms involved in the comparison are classical algorithms, such as DE, algorithms with good results proposed in past years, such as MFO, and new algorithms proposed in recent years, such as SMA. The details are as follows.
(i) HHO
(ii) SMA
(iii) Hunger games search (HGS) [67]
(iv) DE
(v) MFO
(vi) Cuckoo search (CS) [117]
(vii) Grasshopper optimization algorithm (GOA) [118]
The parameters of the experiments were set approximately the same as in the previous experiments. The dimension of the particles was set to 30, the number of particles to 30, and the maximum number of evaluations to 300,000. The test functions are IEEE CEC2017. Table 5 lists the experimental results. In Table 5, AVG denotes the average value obtained by each algorithm after 30 independent tests on the corresponding function, STD denotes the corresponding standard deviation, and Rank denotes the ranking of the algorithm on each function. In addition, we used the Wilcoxon signed-rank test to calculate the p value between algorithms, the purpose of which is to determine whether there is variability between the comparison results of the two algorithms. If the p value is less than 0.05, the comparison between MEWOA and the corresponding algorithm is statistically significant; otherwise, the results are not significant.
There are 30 functions in the CEC2017 test set, divided into 4 categories: F1-F3 are Unimodal functions, F4-F10 are Multimodal functions, F11-F20 are Hybrid functions, and F21-F30 are Composition functions. In Figure 6, we have selected two functions from each class and depicted the convergence curves of MEWOA and the other metaheuristic algorithms. On the Unimodal functions, the performance of MEWOA is ranked in the middle among the listed algorithms; in particular, on the F2 and F3 functions, MEWOA is ranked third among all algorithms, exceeding the classical algorithm DE, so the overall results are still good.
On the Multimodal functions, MEWOA does not perform as well as on the Unimodal functions in terms of both convergence speed and convergence accuracy. However, on F10, its results are still relatively good, and its final convergence accuracy is ranked third. In addition, as the figure shows, MEWOA achieves a good convergence effect in the first half of the iterative process, with only MFO ahead of it.
MEWOA has the best results on the Hybrid functions, especially on F13, F16, F18, and F19, where MEWOA is ranked second among all the algorithms. The experimental results also show that, on the remaining functions, the difference between the convergence accuracy of MEWOA and the first-ranked algorithm is not very large. Finally, on the Composition functions, we can see from the convergence graphs of the F22 and F30 functions that the results of MEWOA are still good, especially on the F22 function, where it achieves a better solution than the other algorithms.

Comparison with Advanced Algorithms.
To further verify the performance of MEWOA, this section selects some advanced algorithms for comparison. Among the compared algorithms are champion algorithms, such as LSHADE, improved DE variants, such as SADE, and algorithms with strong performance, such as HCLPSO. The specific algorithms involved in the comparison are as follows.
(i) Heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [119]
(ii) Self-adaptive differential evolution (SADE) [120]
(iii) Adaptive differential evolution with optional external archive (JADE) [121]
(iv) Comprehensive learning particle swarm optimizer (CLPSO) [122]
(v) Adaptive DE with success-history and linear population size reduction (LSHADE) [123]
(vi) LSHADE_cnEpSi (LSHADE_ES) [124]
(vii) Multistrategy enhanced sine cosine algorithm (MSCA) [16]
The experimental parameters were set in the same way as in the previous section, with some additional modifications. The overall performance of MEWOA is quite good, given that algorithms with extremely strong performance, such as HCLPSO and LSHADE, are among those involved in the comparison. From the convergence curves in Figure 7, we can also see that MEWOA works very well on F1, F2, F11, and F12, in terms of both convergence speed and convergence accuracy; on F11 in particular, MEWOA reaches its best convergence accuracy at one-third of the iterations, a performance that is also worthy of recognition.
As for the Hybrid functions and the Composition functions, MEWOA's performance is not outstanding, but on the two functions F21 and F23 its results are still relatively good: on these two functions, MEWOA's ranking is in the upper-middle range. Among some of the remaining functions, such as F16, F17, F19, F20, and F22, although MEWOA's ranking is not satisfactory, the differences between the algorithms are minimal.

Practical Constraint Modeling Problems.
In this part, we apply the improved MEWOA algorithm to three engineering constraint problems to demonstrate its performance on mathematical constraint modeling problems: the tension/compression spring, the welded beam, and the I-beam.
The mathematical model's main objective is constructed through penalty functions [164], which allow the heuristic algorithm to automatically discard infeasible solutions. There is no need to repair an infeasible solution explicitly; instead, a recursive iteration scheme is used in which each recursive call generates a new point until a feasible solution is found. Thus, the model built with penalty functions, combined with the MEWOA algorithm, is used to handle the three mathematical modeling problems.
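As a sketch of this penalty-function construction (the quadratic static penalty and the coefficient 1e6 are illustrative assumptions, not necessarily the exact scheme of [164]):

```python
def penalized(objective, constraints, penalty=1e6):
    """Wrap a constrained objective with a static penalty term.

    `constraints` is a list of functions g_i with g_i(x) <= 0 when feasible;
    each violated constraint adds penalty * violation^2 to the fitness, so
    the search is steered back toward the feasible region.
    """
    def fitness(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) + penalty * violation
    return fitness

# Toy example: minimize x^2 subject to x >= 1, i.e., g(x) = 1 - x <= 0.
f = penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])
```

A metaheuristic such as MEWOA can then minimize `f` as if it were unconstrained, since infeasible points receive a large fitness and are naturally abandoned.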

Tension/Compression Spring Design Problem.
The tension/compression spring design aims to minimize the spring's weight [165][166][167]. The model is iterated through the MEWOA algorithm to optimize three design variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). Writing x = (x1, x2, x3) = (d, D, N), the mathematical model is as follows:

Objective function:

min f(x) = (x3 + 2) x2 x1^2,

subject to

g1(x) = 1 - (x2^3 x3)/(71785 x1^4) ≤ 0,
g2(x) = (4 x2^2 - x1 x2)/(12566 (x2 x1^3 - x1^4)) + 1/(5108 x1^2) - 1 ≤ 0,
g3(x) = 1 - (140.45 x1)/(x2^2 x3) ≤ 0,
g4(x) = (x1 + x2)/1.5 - 1 ≤ 0.

Variable ranges:

0.05 ≤ x1 ≤ 2.00, 0.25 ≤ x2 ≤ 1.30, 2.00 ≤ x3 ≤ 15.00.

Some scholars have used mathematical or metaheuristic techniques to solve this model. He and Wang [168] used PSO to handle the tension/compression spring design problem, and Coello Coello [169] applied genetic algorithms to it. The minimum weight obtained by MEWOA is listed in Table 7 and is smaller than the minimum values obtained by the other methods.
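The spring model can be coded directly. The following sketch uses the standard formulas of this benchmark and evaluates a near-optimal design commonly reported in the spring-design literature (the specific design point is illustrative, not the paper's own result):

```python
def spring_weight(x):
    """Objective: spring weight f(x) = (N + 2) * D * d^2."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Standard constraints g_i(x) <= 0 of the spring design model."""
    d, D, N = x
    return [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),          # shear stress
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,                # surge frequency
        1.0 - 140.45 * d / (D ** 2 * N),                # deflection
        (d + D) / 1.5 - 1.0,                            # outer diameter
    ]

# A near-optimal design reported in the literature:
x = (0.051749, 0.358179, 11.203763)
weight = spring_weight(x)   # close to 0.012665
```

Wrapping `spring_weight` and `spring_constraints` with a penalty function turns this into the single-objective fitness that MEWOA minimizes.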

Welded Beam Design Problem.
The aim of the welded beam design model [169] is to minimize the manufacturing cost of the welded beam. The model involves four constraint quantities: the critical buckling load (Pc), the shear stress (τ), the bending stress in the beam (θ), and the end deflection (δ). The weld thickness (h), bar length (l), bar height (t), and bar thickness (b) are the design parameters that directly affect the manufacturing cost. Writing x = (x1, x2, x3, x4) = (h, l, t, b), the mathematical model is as follows:

Objective function:

min f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14.0 + x2),

subject to

g1(x) = τ(x) - τmax ≤ 0,
g2(x) = θ(x) - θmax ≤ 0,
g3(x) = x1 - x4 ≤ 0,
g4(x) = 0.10471 x1^2 + 0.04811 x3 x4 (14.0 + x2) - 5.0 ≤ 0,
g5(x) = 0.125 - x1 ≤ 0,
g6(x) = δ(x) - δmax ≤ 0,
g7(x) = P - Pc(x) ≤ 0.

Variable ranges:

0.1 ≤ x1 ≤ 2, 0.1 ≤ x2 ≤ 10, 0.1 ≤ x3 ≤ 10, 0.1 ≤ x4 ≤ 2,

where

τ(x) = sqrt(τ'^2 + 2 τ' τ'' x2/(2R) + τ''^2), τ' = P/(sqrt(2) x1 x2), τ'' = M R/J,
M = P (L + x2/2), R = sqrt(x2^2/4 + ((x1 + x3)/2)^2),
J = 2 { sqrt(2) x1 x2 [ x2^2/12 + ((x1 + x3)/2)^2 ] },
θ(x) = 6 P L/(x4 x3^2), δ(x) = 4 P L^3/(E x3^3 x4),
Pc(x) = (4.013 E sqrt(x3^2 x4^6/36)/L^2) (1 - (x3/(2L)) sqrt(E/(4G))),

with P = 6000 lb, L = 14 in, E = 30 × 10^6 psi, G = 12 × 10^6 psi, τmax = 13600 psi, θmax = 30000 psi, and δmax = 0.25 in.
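A sketch of the welded beam model in Python, using the standard constants of this benchmark formulation (P = 6000 lb, L = 14 in, and so on) and a widely cited near-optimal design point (illustrative, not the paper's own result):

```python
import math

# Constants of the standard welded beam formulation
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def beam_cost(x):
    """Manufacturing cost f(x) = 1.10471 h^2 l + 0.04811 t b (14 + l)."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def beam_constraints(x):
    """Standard constraints g_i(x) <= 0 of the welded beam model."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2.0) * h * l)                 # primary shear stress
    M = P * (L + l / 2.0)                                # bending moment
    R = math.sqrt(l ** 2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * h * l * (l ** 2 / 12.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J                                   # torsional shear stress
    tau = math.sqrt(tau_p ** 2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp ** 2)
    sigma = 6.0 * P * L / (b * t ** 2)                   # bending stress
    delta = 4.0 * P * L ** 3 / (E * t ** 3 * b)          # end deflection
    p_c = (4.013 * E * math.sqrt(t ** 2 * b ** 6 / 36.0) / L ** 2) \
        * (1.0 - (t / (2.0 * L)) * math.sqrt(E / (4.0 * G)))
    return [tau - TAU_MAX, sigma - SIGMA_MAX, h - b,
            0.10471 * h ** 2 + 0.04811 * t * b * (14.0 + l) - 5.0,
            0.125 - h, delta - DELTA_MAX, P - p_c]

# Widely cited near-optimal design:
x = (0.205730, 3.470489, 9.036624, 0.205730)
cost = beam_cost(x)   # close to 1.7249
```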

Some scholars have used mathematical or metaheuristic techniques to solve this model. Kaveh and Khayatazad [171] adopted RO to minimize the model's manufacturing cost, and the enhanced HS variant IHS [170] was also applied to it. Ragsdell and Phillips [172] used the Davidon-Fletcher-Powell method, Richardson's random method, and the Simplex method to find the model's minimum manufacturing cost. As shown in Table 8, with the parameters set to 0.1885, 3.471, 9.11343, and 0.206754, MEWOA obtained a minimum welded beam manufacturing cost of 1.720001, which shows that MEWOA performs very well on this engineering problem.

I-Beam Design Problem.
We used the MEWOA method to solve the I-beam design problem, optimizing four parameters, the flange width (b), the section height (h), and the two thicknesses (t_w, t_f), to minimize the vertical deflection. Writing x = (x1, x2, x3, x4) = (b, h, t_w, t_f), the mathematical model is as follows:

Objective function:

min f(x) = 5000/( t_w (h - 2 t_f)^3/12 + b t_f^3/6 + 2 b t_f ((h - t_f)/2)^2 ),

subject to

g(x) = 2 b t_f + t_w (h - 2 t_f) - 300 ≤ 0.

Variable ranges:

10 ≤ x1 ≤ 50, 10 ≤ x2 ≤ 80, 0.9 ≤ x3 ≤ 5, 0.9 ≤ x4 ≤ 5. (19)
Wang used the ARSM method [174] to solve the model and obtained a minimum vertical deflection of 0.0157; the improved IARSM method reduced this to 0.0131. Gandomi et al. [175] used CS to decrease the minimum vertical deflection to 0.0130747, and Cheng and Prayogo [176] used SOS to obtain a deflection of 0.0130741. Table 9 shows that the vertical deflection obtained by the MEWOA algorithm is 0.0130741, which is as good as or better than all of the comparison methods.

Conclusions and Future Works
This article presents MEWOA, an enhanced WOA that integrates an elite strategy and the spiral motion mechanism of the MFO algorithm to improve the balance between exploration and exploitation of the original WOA. First, MEWOA was evaluated against the basic algorithms in different dimensions to verify its effectiveness. Moreover, it was compared with six metaheuristic algorithms and five advanced algorithms to demonstrate its superiority. The experimental results show that MEWOA achieves much better performance than the original WOA, with substantially improved convergence accuracy and scalability, and that it can effectively solve practical engineering problems.
The results show that MEWOA achieves a good balance between exploration and exploitation and solves constrained problems effectively.
In future work, we plan to improve WOA more deeply, starting from its underlying principles. MEWOA can also be extended to multiobjective or binary versions for other optimization tasks.

Data Availability
The data involved in this study are all public data, which can be downloaded through public channels.

Conflicts of Interest
The authors declare that they have no conflicts of interest.