Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization

In this paper, a novel swarm-based metaheuristic algorithm called tuna swarm optimization (TSO) is proposed. The main inspiration for TSO is the cooperative foraging behavior of tuna swarms. The work mimics two foraging behaviors of tuna swarms, spiral foraging and parabolic foraging, to develop an effective metaheuristic algorithm. The performance of TSO is evaluated by comparison with other metaheuristics on a set of benchmark functions and several real engineering problems. Sensitivity, scalability, robustness, and convergence analyses were performed and combined with the Wilcoxon rank-sum test and the Friedman test. The simulation results show that TSO performs better than the comparison algorithms.


Introduction
Real-world optimization problems have become increasingly challenging, which calls for more efficient solution methods. Scholars have studied various approaches to solving these complex and difficult real-world problems. Some researchers solve these optimization problems using traditional methods such as quasi-Newton, conjugate gradient, and sequential quadratic programming methods. However, owing to the nonlinear, nondifferentiable characteristics of most real-world optimization problems and the involvement of multiple decision variables and complex constraints, these traditional methods struggle to solve them effectively [1,2]. The metaheuristic algorithm has the advantages of not relying on the problem model, not requiring gradient information, and having strong search capability and wide applicability, and it can achieve a good balance between solution quality and computational cost [3]. Therefore, metaheuristic algorithms have been proposed to solve real-world optimization problems, such as image segmentation [4,5], feature selection [6,7], mission planning [8,9], parameter optimization [10,11], job shop scheduling [12,13], etc.
Metaheuristic algorithms are usually classified into three categories [14]: evolution-based algorithms, physical-based algorithms, and swarm-based algorithms. The evolution-based algorithms are inspired by the laws of evolution in nature. Genetic algorithm (GA) [15], inspired by Darwin's theory of survival of the fittest, is a well-known evolution-based algorithm. With the popularity of GA, several other widely used evolution-based algorithms have been proposed, including differential evolution (DE) [16], genetic programming (GP) [17], evolutionary strategies (ES) [18], and evolutionary programming (EP) [19]. In addition, several new evolution-based algorithms have been proposed, such as artificial algae algorithm (AAA) [20], biogeography-based optimization (BBO) [21], and monkey king evolutionary (MKE) [22]. The physical-based algorithms are inspired by various laws of physics. One of the most famous algorithms of this category is simulated annealing (SA) [23]. SA is inspired by the law of thermodynamics in which a material is heated up and then cooled slowly. Other physical-based algorithms have been proposed, including gravitational search algorithm (GSA) [24], nuclear reaction optimization (NRO) [25], water cycle algorithm (WCA) [26], and sine cosine algorithm (SCA) [27]. The swarm-based algorithms are inspired by the social behavior of different species in natural groups. Particle swarm optimization (PSO) [28] and ant colony optimization (ACO) [29] are two typical swarm-based algorithms. PSO and ACO mimic the aggregation behavior of bird colonies and the foraging behavior of ant colonies, respectively. Some other algorithms of this category include: grey wolf optimizer (GWO) [30], monarch butterfly optimization (MBO) [31], elephant herding optimization (EHO) [32], moth search algorithm (MSA) [33], manta ray foraging optimization (MRFO) [34], earthworm optimization algorithm (EOA) [35], etc.
With the development of metaheuristics, a class of human-based metaheuristic algorithms has also emerged. These algorithms are inspired by the characteristics of human activity. Teaching-learning-based optimization (TLBO) [36], inspired by traditional teaching methods, is a typical example of this category. Other human-based metaheuristics include: social evolution and learning optimization (SELO) [37], group teaching optimization algorithm (GTOA) [38], heap-based optimizer (HBO) [39], political optimizer (PO) [40], etc.
A common feature of all these metaheuristic algorithms is that they rely on exploration and exploitation in the search space to find the optimal solution [41,42]. Exploration means that the algorithm searches for promising regions in a wide search space, while exploitation is a further search for the best solution within those promising regions. The balance of the two search behaviors affects the quality of the solution: when exploration dominates, exploitation declines, and vice versa. Therefore, balancing exploration and exploitation is a major challenge for metaheuristics. Although new algorithms are constantly being developed, the no free lunch (NFL) theorem [43] states that no particular algorithm can solve all optimization problems perfectly. The NFL theorem has motivated researchers to develop effective metaheuristic algorithms for optimization problems in various fields.
In this paper, a novel swarm-based metaheuristic called tuna swarm optimization (TSO) is presented. It is inspired by two types of swarm foraging behavior of tunas. TSO is evaluated on 23 benchmark functions and 3 engineering design problems. The test results reveal that the proposed method significantly outperforms popular and recent metaheuristics. This paper is structured as follows: Section 2 describes the inspiration for TSO and builds the corresponding mathematical model. A benchmark function set and three engineering design problems are employed to check the performance of TSO in Sections 3 and 4, respectively. Section 5 concludes the overall work and provides an outlook for the future.

Tuna Swarm Optimization
2.1. Inspiration. Tuna, belonging to the tribe Thunnini, are marine carnivorous fish. There are many species of tuna, and their sizes vary greatly. Tuna are top marine predators, feeding on a variety of midwater and surface fish. Tunas are continuous swimmers with a unique and efficient way of swimming (called fishtail swimming) in which the body stays rigid while the long, thin tail swings rapidly. Although a single tuna swims very fast, it still cannot match the rapid reactions of nimble small fish. Therefore, tuna use a "group travel" method for predation. They use their intelligence to find and attack their prey, and these creatures have evolved a variety of effective and intelligent foraging strategies. The first strategy is spiral foraging: when tuna are feeding, they swim in a spiral formation to drive their prey into shallow water, where the prey can be attacked more easily. The second strategy is parabolic foraging: each tuna swims after the previous individual, forming a parabolic shape to enclose the prey.
Tuna successfully forage by the above two methods. In this paper, a new swarm-based metaheuristic optimization algorithm, namely, tuna swarm optimization, is proposed based on modeling these natural foraging behaviors.

Mathematical Model.
In this section, the mathematical model of the proposed algorithm is described in detail.

Initialization.
Similar to most swarm-based metaheuristics, TSO starts the optimization process by generating initial individuals uniformly at random in the search space:

X_i^int = rand · (ub − lb) + lb,  i = 1, 2, …, NP,

where X_i^int is the i-th initial individual, ub and lb are the upper and lower boundaries of the search space, NP is the number of tuna, and rand is a uniformly distributed random vector ranging from 0 to 1.
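A minimal sketch of this uniform random initialization in Python (the function name and the list-based population representation are illustrative choices, not taken from the paper):

```python
import random

def initialize_population(np_, dim, lb, ub, rng=None):
    """Generate NP tuna positions uniformly at random in [lb, ub]^dim,
    mirroring X_i^int = rand * (ub - lb) + lb."""
    rng = rng or random.Random()
    return [[lb + rng.random() * (ub - lb) for _ in range(dim)]
            for _ in range(np_)]
```

Each row is one tuna; in the paper ub and lb may be vectors, while this sketch uses scalar bounds for brevity.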

Spiral Foraging.
When sardines, herring, and other small schooling fish encounter predators, the entire school forms a dense formation and constantly changes its swimming direction, making it difficult for predators to lock onto a target. At this time, the tuna group chases the prey by forming a tight spiral formation. Although most of the fish in the school have little sense of direction, when a small group of fish swims firmly in a certain direction, the nearby fish adjust their direction one after another, finally forming a large group with the same goal that starts to hunt. In addition to spiraling after their prey, schools of tuna also exchange information with each other. Each tuna follows the previous fish, thus enabling information sharing among neighboring tuna. Based on the above principles, the mathematical formula for the spiral foraging strategy is as follows:

β = e^{bl} · cos(2πb),

where X_i^{t+1} is the i-th individual at iteration t + 1, X_best^t is the current optimal individual (food), α1 and α2 are weight coefficients that control the tendency of individuals to move towards the optimal individual and the previous individual, a is a constant that determines the extent to which the tuna follow the optimal individual and the previous individual in the initial phase, t denotes the current iteration number, t_max is the maximum number of iterations, and b is a random number uniformly distributed between 0 and 1.
When all tuna forage spirally around the food, they have good exploitation ability in the search space around the food. However, when the optimal individual fails to find food, blindly following it is not conducive to group foraging. Therefore, we consider generating a random coordinate in the search space as the reference point for the spiral search. This enables each individual to search a wider space and gives TSO global exploration ability. The specific mathematical model is described as follows: where X_rand^t is a randomly generated reference point in the search space.
In particular, metaheuristic algorithms usually perform extensive global exploration in the early stage and then gradually transition to precise local exploitation. Therefore, TSO changes the reference point of spiral foraging from a random individual to the optimal individual as the iterations increase. In summary, the final mathematical model of the spiral foraging strategy is as follows:
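The weight coefficients α1 and α2 and the spiral parameter β can be sketched in Python as follows. The exact formulas for α1, α2, and the exponent l are not given above, so the expressions below are plausible assumptions consistent with the described roles of a, t, t_max, and b, not the paper's verbatim equations:

```python
import math

def spiral_coefficients(a, t, t_max):
    # alpha1 grows and alpha2 shrinks with t: early on, individuals weight
    # the previous tuna more; later, they follow the best individual more.
    alpha1 = a + (1 - a) * t / t_max
    alpha2 = (1 - a) * (1 - t / t_max)
    return alpha1, alpha2

def spiral_beta(b, l):
    # beta = e^{b*l} * cos(2*pi*b), with b uniform in [0, 1].
    return math.exp(b * l) * math.cos(2 * math.pi * b)
```

Note that α1 + α2 = 1 for every t under this schedule, so each tuna's new position is a blend of the reference point and the previous individual.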

Parabolic Foraging.
In addition to feeding in a spiral formation, tunas also cooperate by forming a parabolic formation to feed, with the food as a reference point. In addition, tuna hunt for food by searching around themselves. These two approaches are performed simultaneously, with the assumption that the selection probability of each is 50%. The specific mathematical model is described as follows: where TF is a random number with a value of 1 or −1.
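The parabolic update can likewise be sketched. The distance parameter p and the exact update rule below are assumptions chosen to match the described behavior (a parabola around the best individual that shrinks with the iteration count, and a 50% chance of searching around the current position), not the paper's verbatim equations:

```python
import math, random

def parabolic_update(x, x_best, t, t_max, rng=None):
    """One plausible reading of parabolic foraging: with probability 0.5
    chase the best individual along a shrinking parabola; otherwise
    search around the current position."""
    rng = rng or random.Random()
    tf = 1 if rng.random() < 0.5 else -1        # TF is +1 or -1
    p = (1 - t / t_max) ** (t / t_max)          # shrinks as t grows
    if rng.random() < 0.5:
        return [xb + rng.random() * (xb - xi) + tf * p * p * (xb - xi)
                for xi, xb in zip(x, x_best)]
    return [tf * p * p * xi for xi in x]
```

At t = t_max the parameter p reaches 0, so the update collapses onto (or past) the reference point, which mirrors the late-stage exploitation described above.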
Tuna hunt cooperatively through the two foraging strategies and thereby find their prey. For the optimization process of TSO, the population is first randomly generated in the search space. In each iteration, each individual randomly chooses one of the two foraging strategies to execute, or regenerates its position in the search space according to probability z. The value of the parameter z is discussed in the parameter-setting simulation experiments. During the entire optimization process, all individuals of TSO are continuously updated and evaluated until the end condition is met, after which the optimal individual and the corresponding fitness value are returned. The TSO pseudocode is shown in Algorithm 1. The detailed process of TSO is shown in Figure 1.

Benchmark Function Set and Compared Algorithms.
In this section, in order to evaluate the performance of the proposed TSO, a set of well-known benchmark functions is employed for testing. This set includes 7 unimodal functions, 6 multimodal functions, and 10 multimodal functions with fixed dimensions. The unimodal functions F1-F7 have only one global optimal solution and are, therefore, often employed to evaluate the local exploitation capability of an algorithm.
Besides the global optimal solution, the multimodal functions F8-F23 also have multiple local optimal solutions and are, therefore, used to challenge the global exploration capability and local optimal avoidance capability of an algorithm. e mathematical formulas and characteristics of these functions are shown in Table 1. A three-dimensional visualization of these functions is given in Figure 2.
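For illustration, two representative members of such a suite, written in Python (identifying F1 with the sphere function and F9 with the Rastrigin function follows the usual 23-function collection and is an assumption here):

```python
import math

def f1_sphere(x):
    """F1: unimodal sphere function, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def f9_rastrigin(x):
    """F9: highly multimodal Rastrigin function, global minimum 0 at the
    origin, with a regular grid of local minima around it."""
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
```

The sphere function rewards pure exploitation, while Rastrigin's many local minima test an optimizer's ability to escape them.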
Algorithm 1: Pseudocode of TSO.

Input: the population size NP and the maximum iteration t_max
Output: the location of food (the best individual) and its fitness value
Initialize the random population of tunas X_i^int (i = 1, 2, ..., NP)
Assign the free parameters a and z
While (t < t_max)
    Calculate the fitness values of the tunas
    Update X_best^t
    For (each tuna) do
        Update α1, α2, p
        If (rand < z) then
            Update the position X_i^{t+1} using equation (1)
        Else if (rand ≥ z) then
            If (rand < 0.5) then
                If (t/t_max < rand) then
                    Update the position X_i^{t+1} using equation (7)
                Else if (t/t_max ≥ rand) then
                    Update the position X_i^{t+1} using equation (2)
            Else if (rand ≥ 0.5) then
                Update the position X_i^{t+1} using equation (9)
    End for
    t = t + 1
End while
Return the best individual X_best and the best fitness value F(X_best)
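Putting the pieces together, Algorithm 1 can be translated into a compact runnable sketch. Since the numbered equations are not reproduced above, the coefficient schedules (α1, α2, l, p) and the update rules below are assumptions chosen to match the described behavior, and positions are clipped to the bounds (a detail the pseudocode leaves open):

```python
import math, random

def tso(f, dim, lb, ub, np_=30, t_max=200, a=0.7, z=0.05, seed=0):
    """Minimize f over [lb, ub]^dim with a sketch of tuna swarm optimization."""
    rng = random.Random(seed)
    clip = lambda v: min(max(v, lb), ub)
    pop = [[lb + rng.random() * (ub - lb) for _ in range(dim)]
           for _ in range(np_)]
    best, best_fit = None, float("inf")
    for t in range(1, t_max + 1):
        for x in pop:                                   # track the global best
            fx = f(x)
            if fx < best_fit:
                best, best_fit = list(x), fx
        a1 = a + (1 - a) * t / t_max                    # assumed weight schedule
        a2 = (1 - a) * (1 - t / t_max)
        l = math.exp(3 * math.cos(((t_max + 1 / t) - 1) * math.pi))  # assumed
        new_pop = []
        for i, x in enumerate(pop):
            prev = new_pop[i - 1] if i > 0 else x       # follow previous tuna
            if rng.random() < z:                        # rare random restart
                nx = [lb + rng.random() * (ub - lb) for _ in range(dim)]
            elif rng.random() < 0.5:                    # spiral foraging
                b = rng.random()
                beta = math.exp(b * l) * math.cos(2 * math.pi * b)
                ref = ([lb + rng.random() * (ub - lb) for _ in range(dim)]
                       if t / t_max < rng.random() else best)
                nx = [a1 * (r + beta * abs(r - xi)) + a2 * pv
                      for xi, r, pv in zip(x, ref, prev)]
            else:                                       # parabolic foraging
                tf = 1 if rng.random() < 0.5 else -1
                p = (1 - t / t_max) ** (t / t_max)
                if rng.random() < 0.5:
                    nx = [xb + rng.random() * (xb - xi) + tf * p * p * (xb - xi)
                          for xi, xb in zip(x, best)]
                else:
                    nx = [tf * p * p * xi for xi in x]
            new_pop.append([clip(v) for v in nx])
        pop = new_pop
    return best, best_fit
```

On a 5-dimensional sphere function this sketch converges quickly toward the origin, consistent with the qualitative behavior the paper reports for TSO on unimodal functions.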
All algorithms were implemented in MATLAB R2016b on a computer with Windows 10 Professional 64-bit and 16 GB RAM. The population size and the maximum number of iterations for all optimizers were set to 50 and 1000, respectively. All results were recorded and compared based on the performance of each optimizer over 30 independent runs. It is well known that the parameter settings of an algorithm have a huge impact on its performance. For a fair comparison, the parameter settings of all compared algorithms follow those used by the authors of the original articles. Table 2 lists the parameters used by each algorithm. Table 3 presents the results of TSO and the other comparison algorithms for solving F1-F13 with Dim = 30. In addition, the performance of TSO is also evaluated using test functions of different dimensions, which helps assess the ability of TSO to solve high-dimensional functions. Tables 4-6 show the results of TSO and the other comparison algorithms for F1-F13 with dimensions of 100, 500, and 1000. As shown by the results for the unimodal functions F1-F7 in Tables 3-6, TSO achieves the best results on most of the functions, significantly outperforming almost all the comparison algorithms. In addition, TSO still outperforms the comparison algorithms when dealing with high-dimensional problems. On the other hand, the results obtained by TSO fluctuate little as the dimensionality increases, which can also be observed in the convergence curves in Figure 3. Specifically, TSO performs best on F1-F5 when Dim = 30. In particular, TSO consistently obtains the theoretical optimal solution on F1 and F3. On F7, HHO is the best optimizer, with TSO close behind. TSO performs relatively poorly on F6. For high-dimensional functions, TSO and HHO rank in the top 2: TSO gives the most satisfactory results on F1-F4, while HHO performs best on F5-F7, with TSO ranking just behind it.
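The evaluation protocol (30 independent runs per optimizer, with summary statistics for comparison) can be sketched as follows; `random_search` is a deliberately simple stand-in optimizer, not one of the compared algorithms:

```python
import random, statistics

def random_search(f, dim, lb, ub, evals, rng):
    # Placeholder optimizer standing in for TSO or a comparison algorithm.
    best = float("inf")
    for _ in range(evals):
        x = [lb + rng.random() * (ub - lb) for _ in range(dim)]
        best = min(best, f(x))
    return best

def benchmark(f, dim, lb, ub, runs=30, evals=1000, seed=0):
    """Run the optimizer `runs` times with independent seeds and report
    the mean and standard deviation of the best values found."""
    results = [random_search(f, dim, lb, ub, evals, random.Random(seed + r))
               for r in range(runs)]
    return statistics.mean(results), statistics.stdev(results)
```

Reporting mean and standard deviation over independent seeded runs, as done here, is what makes the Wilcoxon and Friedman comparisons later in the paper meaningful.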
Overall, TSO exhibits the best exploitation ability among all the tested algorithms for unimodal functions of different dimensions. The results for solving the multimodal functions F8-F13 in different dimensions are also given in Tables 3-6. The analysis shows that TSO performs best in all dimensions when solving F8-F11. TSO ranks behind HHO in solving F12 and F13. Notably, TSO can stably obtain the theoretical optimal solution for F9-F11. As the convergence curves show, TSO's performance does not degrade much as the dimensionality increases, demonstrating its superior performance in solving high-dimensional multimodal functions.

Analysis of TSO for Fixed Dimensional Functions.
The test results of TSO applied to the fixed dimensional functions are shown in Table 7. The means in the table show that TSO is highly competitive on the fixed dimensional functions, performing best on eight of the ten. TSO ranks second and third on F8 and F15, respectively. In order to analyze the distribution characteristics of TSO when solving fixed dimensional functions, box plots of F14-F23 are drawn based on the results of 30 runs, as shown in Figure 4. It can be observed that TSO outperforms the comparison algorithms in most functions in terms of maximum, minimum, and median values, and its distribution of solutions is more concentrated; thus, TSO performs better than the other algorithms.

Wall-Clock Time Analysis of TSO.
Computational efficiency is also an important measure of algorithm performance. Table 8 records the average computational time consumed by these algorithms over 30 independent runs on each function. It can be seen that TSO's computation does not take much time, being longer only than WOA and TSA. Although TSO takes slightly more time than these two, its performance is better. Moreover, TSO takes less time while performing better than the remaining comparison algorithms, so it has a clear efficiency advantage. Figure 5 illustrates the ranking of each algorithm's computational time consumption, and it can be seen at a glance that WOA, TSA, and TSO rank in the top three.

Parameter Sensitivity Analysis.
This section focuses on the analysis of the values of the two control parameters (z and a) of TSO. The first parameter is z, which controls the probability of randomly generating individuals. The second parameter is a, which controls the extent to which each individual follows the optimal individual and the neighboring individuals. The 13 variable dimensional functions (F1-F13) and 10 fixed dimensional functions (F14-F23) are used for analyzing the effect of the values of these two parameters. According to the results, the Friedman test results of the unimodal functions F1-F7, the multimodal functions F8-F23, and all functions F1-F23 are given in Tables 9-11, respectively. From the results in Table 9, it is clear that the smaller the value of z, the better the TSO performance, and the larger the value of a, the better the results obtained by TSO. This is because the smaller the value of z is, the smaller the probability of randomly generating new individuals, while the larger the value of a is, the higher the degree to which each individual follows the optimal individual, both of which are beneficial for improving exploitation ability and accelerating convergence. For the multimodal functions F14-F23, we can draw almost the opposite conclusion from Table 10 compared to the unimodal functions. The rankings considering the results of all functions are given in Table 11. The results show that TSO has the best performance when z = 0.05 and a = 0.7.

Statistical Analysis of TSO.
This section further analyzes the differences between TSO and the other algorithms statistically using the Wilcoxon rank-sum test and the Friedman test.
The Wilcoxon rank-sum test is a paired test that checks for significant differences between two algorithms. The results of the test between TSO and each algorithm at significance level α = 0.05 are given in Tables 12-16, where the symbols "+/=/-" indicate that TSO performs better than, similar to, or worse than the comparison algorithm. Table 17 gives the statistics, over different dimensions and functions, of the cases in which TSO is better than, similar to, and worse than each comparison algorithm. TSO outperforms the other comparison algorithms in the different cases, achieving results of 32/15/15, 42/13/7, 62/0/0, 61/1/0, 61/1/0, 51/6/5, and 55/7/0, confirming the significant superiority of TSO in most cases compared to the other algorithms. Table 18 shows the statistics for F1-F13 in different dimensions and for the fixed dimensional functions F14-F23. The statistics show that TSO ranks first in all cases. Therefore, it can be considered that TSO has the best performance compared to the other algorithms.
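As an illustration, the rank-sum statistic and a normal-approximation p-value can be computed in pure Python (a simplified sketch without tie-variance or continuity corrections, so it is not a drop-in replacement for a statistics package):

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (W, p) where W is the rank sum of sample x."""
    combined = sorted((v, 0 if i < len(x) else 1)
                      for i, v in enumerate(list(x) + list(y)))
    ranks, i = [0.0] * len(combined), 0
    while i < len(combined):                 # average ranks over ties
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0              # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
    return w, p
```

With 30 runs per algorithm (as in the paper's setup), the normal approximation is reasonable; a p-value below 0.05 would justify a "+" or "-" entry, and one above it an "=".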

TSO for Engineering Design Problems
This section uses three engineering design problems to assess TSO's ability to solve real-world problems. These problems include the pressure vessel design problem, the tension/compression spring design problem, and the welded beam design problem. TSO uses the same number of iterations (1000) and the same population size (50) in solving these engineering design problems. Each problem is run 30 times independently, and the statistical results are compared with those of other algorithms in the literature.

Pressure Vessel Design.
The pressure vessel design problem shown in Figure 6 is a well-known benchmark design problem whose goal is to reduce the total cost, including the forming cost, material cost, and welding cost. There are four design variables: vessel thickness Ts (x1), head thickness (x2), inner radius R (x3), and length of the cylindrical section L (x4). The problem is formulated as an objective function subject to constraints and the variable ranges, with 1 × 0.0625 ≤ x1. The results of TSO for solving this problem are compared with other algorithms such as DDSCA, ISCA, MBA, CPSO, TEO, hHHO-SCA, HPSO, MVO, and AFA, as shown in Table 19. The results show that the TSO solution is superior to the solutions provided by the comparison algorithms, with optimal parameter values [0.7782, 0.3846, 40.3196, 199.9999], corresponding to a minimum cost of 5885.3327.
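Since the objective and constraints are not reproduced above, the sketch below uses the standard pressure vessel formulation from the engineering optimization literature (an assumption, though it is consistent with the optimum reported for TSO):

```python
import math

def vessel_cost(x):
    """Standard pressure vessel cost: forming + material + welding."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x):
    """Feasibility requires g_i(x) <= 0 for all four constraints
    (standard formulation: thickness codes, volume, and length limits)."""
    x1, x2, x3, x4 = x
    return [-x1 + 0.0193 * x3,
            -x2 + 0.00954 * x3,
            -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000,
            x4 - 240]
```

Evaluating `vessel_cost` at the solution reported for TSO gives roughly 5885.4, matching the reported minimum up to rounding of the printed parameters; the first and third constraints are essentially active there, as expected for a cost-optimal design.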

Tension/Compression Spring Design.
Figure 6: Schematic of the pressure vessel design problem (reproduced from [49]).

The tension/compression spring design problem is a mechanical engineering design optimization problem. As shown in Figure 7, the goal of this problem is to reduce the weight of the spring. It involves four nonlinear inequality constraints and three continuous variables: wire diameter w (x1), average coil diameter d (x2), and number of coils L (x3). The problem is described by an objective function subject to these constraints. The solution of TSO is compared with other methods given in the literature, including GA3, CPSO, CDE, DDSCA, GSA, hHHO-SCA, AEO, and MVO. Table 20 shows the parameters and costs corresponding to the optimal solution of each algorithm. As can be seen from Table 20, TSO is the best algorithm for solving this problem. The optimal parameter values corresponding to the lowest cost of 1.724852 are [0.205729, 3.470488, 9.036623, 0.205729].
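As a concrete sketch, the standard tension/compression spring formulation from the engineering optimization literature can be written as follows (this is the common textbook form and is assumed here, not quoted from the paper):

```python
def spring_weight(x):
    """Objective: minimize spring weight, with x1 = wire diameter w,
    x2 = mean coil diameter d, x3 = number of active coils L."""
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1 ** 2

def spring_constraints(x):
    """Feasibility requires g_i(x) <= 0: deflection, shear stress,
    surge frequency, and outer-diameter limits (standard formulation)."""
    x1, x2, x3 = x
    return [1 - x2 ** 3 * x3 / (71785 * x1 ** 4),
            (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 ** 3 * x1 - x1 ** 4))
            + 1 / (5108 * x1 ** 2) - 1,
            1 - 140.45 * x1 / (x2 ** 2 * x3),
            (x1 + x2) / 1.5 - 1]
```

The objective is simply the volume of wire in the active and end coils, which is why reducing the wire diameter x1 has a quadratic effect on the weight.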

Conclusions
This work presents a novel swarm-based metaheuristic algorithm: tuna swarm optimization. The algorithm is inspired by the cooperative foraging mechanisms of tuna, namely spiral foraging and parabolic foraging. The method has few adjustable parameters and can be implemented easily. TSO was comprehensively evaluated using a set of benchmark functions in different dimensions and was compared with other state-of-the-art algorithms. The results show that TSO is superior to the comparison algorithms. In addition, the pressure vessel design problem, the tension/compression spring design problem, and the welded beam design problem were investigated. The statistical results show that TSO has high potential for solving real-world optimization problems compared to the reported methods. A major factor in TSO's success is the balance of exploitation and exploration achieved through the two foraging strategies. Meanwhile, requiring fewer iterations brings lower time costs, which is another of TSO's strengths. However, while TSO performs excellently on most functions, there is still room for enhancement on a small fraction of them. This can be achieved by further improving TSO's ability to escape local optima, using methods such as hybridization with other algorithms, adaptive parameters, etc.
For future work, binary and multiobjective versions of TSO can be developed for discrete problems and multiobjective optimization problems. Moreover, TSO will be applied to solve UAV mission planning problems such as trajectory planning problems, target allocation problems, etc. A further interesting direction would be to investigate the performance of different constraint handling methods in solving constrained optimization problems.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Authors' Contributions
Andi Tang and Lei Xie made major contributions to this work by developing the conception and the code.