A New Optimization Algorithm Based on Search and Rescue Operations

In this paper, a new optimization algorithm called the search and rescue optimization algorithm (SAR) is proposed for solving single-objective continuous optimization problems. SAR is inspired by the explorations carried out by humans during search and rescue operations. The performance of SAR was evaluated on fifty-five optimization functions, including a set of classic benchmark functions and a set of modern CEC 2013 benchmark functions from the literature. The obtained results were compared with those of twelve optimization algorithms, including well-known optimization algorithms, recent variants of GA, DE, CMA-ES, and PSO, and recent metaheuristic algorithms. The Wilcoxon signed-rank test was used for some of the comparisons, and the convergence behavior of SAR was investigated. The statistical results indicate that SAR is highly competitive with the compared algorithms. Moreover, to evaluate the applicability of SAR to real-world optimization problems, it was applied to three engineering design problems; the results reveal that SAR finds more accurate solutions with fewer function evaluations than the other existing algorithms. Thus, the proposed algorithm can be considered an efficient optimization method for real-world optimization problems.


Introduction
In our world, there are many optimization problems for which different optimization algorithms are used. These algorithms can be classified into deterministic and stochastic optimization algorithms. Deterministic algorithms always produce the same outputs for particular inputs and are often used as local search algorithms. Unlike deterministic algorithms, stochastic algorithms have random components and produce different outputs for the same inputs. Many metaheuristic algorithms implement some form of stochastic optimization [1]. In recent decades, many metaheuristic algorithms have been proposed to solve optimization problems. The genetic algorithm (GA) [2,3], particle swarm optimization (PSO) [4,5], and ant colony optimization (ACO) [6,7] are some of the most widely used metaheuristic algorithms. Some features of these algorithms include simple implementation, flexibility, and the capability to escape local optima. Most metaheuristic algorithms are inspired by physical or natural phenomena, e.g., the movement of animals in search of food sources. Consequently, these algorithms are easily understood and reproducible as software programs for various optimization problems, and they are able to find optimal solutions regardless of the physical nature of the problem. Unlike other optimization methods, metaheuristic algorithms can, owing to their random nature, find globally optimal solutions for problems with many local optima. These reasons have led to the extensive use of such algorithms in solving various optimization problems.
In recent years, researchers have carried out extensive studies on metaheuristic algorithms such as harmony search (HS) [8,9], artificial bee colony (ABC) [10,11], cuckoo search (CS) [12], the imperialist competitive algorithm (ICA) [13], teaching-learning-based optimization (TLBO) [14], the backtracking search optimization algorithm (BSA) [15], the firefly algorithm (FA) [16], Yin-Yang-pair optimization (YYPO) [17], and the squirrel search algorithm (SSA) [18]. Besides, many metaheuristic algorithms have been enhanced to solve real-world optimization problems, such as a decomposition-based multiobjective firefly algorithm developed for RFID network planning [19] and a novel diffusion particle swarm optimization proposed for optimizing sink placement [20]. Based on the "no free lunch" (NFL) theorem [21,22], no optimization algorithm works well on all optimization problems: an algorithm may achieve very good results on one set of optimization problems while being unsuitable for others. Therefore, various metaheuristic algorithms might be good for one series of optimization problems but not for others. Metaheuristic algorithms can be categorized, according to their nature, into groups such as evolution-based, swarm-based, physics-based, and human-based algorithms.
(i) Evolution-based algorithms are developed based on evolution techniques. The GA, the biogeography-based optimizer (BBO) [23,24], and the differential evolution (DE) algorithm [25,26] are examples of this group of metaheuristic algorithms; for example, the genetic algorithm is inspired by the theory of evolution. (ii) In nature, many living beings live socially and search in groups for a variety of goals, such as hunting and finding food sources, using different search strategies [27]. Some metaheuristic algorithms solve optimization problems by modelling the social behavior of living organisms in nature. These types of metaheuristic algorithms are called population-based swarm intelligence (SI) or swarm-based algorithms. Algorithms such as PSO, ABC, ACO, FA, CS, krill herd (KH) [28], simplified dolphin echolocation (SDE) [29], and the grey wolf optimizer (GWO) [30] are categorized in this group; for example, PSO was inspired by the movement of organisms in a bird flock or fish school searching for food sources. (iii) Physics-based algorithms are inspired by physical phenomena. Algorithms like Big Bang-Big Crunch (BB-BC) [31], colliding bodies optimization (CBO) [32], the gravitational search algorithm (GSA) [33,34], star graph [35], water wave optimization (WWO) [36], and ray optimization [37] belong to this group; for example, WWO is inspired by the refraction and breaking rules of water surface waves. (iv) Human-based algorithms are based on human behavior, like the tabu search (TS) [38,39], human mental search (HMS) [40], ICA, and TLBO algorithms. For example, TS and TLBO were inspired by human memory function and by the way humans learn and teach in a classroom, respectively.
In this paper, a new metaheuristic algorithm is introduced based on how searching is performed in search and rescue operations. Humans' search methods have evolved over thousands of years, yet no existing algorithm models human behavior during this type of search for solving optimization problems. This observation encouraged the authors to propose a new metaheuristic based on these features. The algorithm is categorized as a human-based algorithm. This article is organized as follows: after the introduction in Section 1, Section 2 gives an introduction to search and rescue operations. Section 3 presents the proposed algorithm. In Section 4, the comparative tests and benchmark functions used for comparing algorithms are introduced. Section 5 presents the results and discussions, and finally, Section 6 presents the conclusions of this paper.

Search and Rescue Operations
Like other living creatures, human beings search for different purposes in groups. A search can be conducted for a variety of goals, such as hunting, finding food sources, or finding lost people. One type of group search is the "search and rescue operation." Search is a systematic operation using available personnel and facilities to locate persons in distress. Rescue is an operation to retrieve persons in distress and deliver them to a safe place [41]. One of the world's earliest search and rescue efforts ensued following the 1656 wreck of the Dutch merchant ship Vergulde Draeck off the west coast of Australia [42].
Search and rescue operations are divided into several types, such as mountain rescue, ground search and rescue, urban search and rescue, air-sea rescue, and combat search [43]. In the United States, institutions such as the American Society for Testing and Materials (ASTM) and the National Fire Protection Association (NFPA) provide codes for search and rescue operations [44]. Search and rescue operations are sometimes conducted to find specific people who are lost, and the codes provide requirements for searches that increase the chances of a successful find. In the following, the procedure for finding lost people is described, considering the main concepts of this operation. Humans can identify the clues and traces of lost people based on their training. The found clues have different values relative to each other and provide different information about the lost people; for example, some clues indicate the likelihood of the lost people being present at that location. Each group member evaluates clues based on his/her training and delivers the information on found clues to the other members through communication equipment. Finally, the members can search based on the importance of these clues and the information that can be obtained from them. Typically, group members search around the clues or in directions created by connecting the clues [45]. Therefore, human searches in search and rescue operations are divided into two phases: social and individual. In the social phase, the group members search, based on the positions and quality of the found clues, in areas that are likely to yield better clues. In the individual phase, the search is done regardless of the positions and quality of the clues found by others. Clues can be divided into the following two categories: (1) Hold clue: a member of the search group is present at the clue and searches around it.
(2) Abandoned clue: group members have found the clue and there is no one in that position. In other words, the human who found the clue has left it to find better clues, but the information about that clue is available for group members.
In Figure 1, points A and B are the locations of a group member (human) and a clue, respectively. Path 1, Path 2, and Path 3 are three assumed paths that the lost person has likely passed through, and the arrows show the movement directions. In the social phase, the human at position A selects the search direction based on the position of clue B. Since searching around better clues increases the probability of finding the lost person, the area that has better clues in the direction of AB is selected. In other words, if there are better clues in area 1 compared to area 2, area 1 is chosen; otherwise, area 2 is selected to continue the search. In the sample case depicted in Figure 1, both Path 1 and Path 2 pass through points A and B. If the lost person has passed along Path 1 or Path 2, this simple strategy increases the chances of finding better clues in the social phase. In the individual phase, the human at point A searches around the best clue found so far. This search is done in an area, say area 3. If the lost person has passed along Path 3, the probability of finding him/her is higher during the individual phase than during the social phase. When a new location is searched in one of these two phases and that location has better clues than the previous location (position A), it becomes the new position of the group member.

A Search and Rescue Optimization Algorithm Proposal
In this section, the mathematical model of the proposed algorithm for solving a "maximization problem" is described. In SAR, the humans' positions correspond to the solutions of the optimization problem, and the amount of clues found at these positions represents the objective function value of those solutions. The flowchart of SAR is shown in Figure 2.

Clues.
The group members gather clue information during the search. They leave clues behind whenever they find better clues at other positions, but the information about the left clues is used to improve the search operation. In the proposed model, the positions of the left clues are stored in the memory matrix (matrix M), whereas the humans' positions are stored in the position matrix (matrix X). The dimensions of matrix M are equal to those of matrix X: both are N × D matrices, where D is the dimension of the problem and N is the number of humans. The clues matrix (matrix C) contains the positions of the found clues and consists of the two matrices X and M. Equation (1) shows how C is created. All new solutions in the social and individual phases are generated based on the clues matrix, so it is an essential part of SAR. The matrices X, M, and C are updated in each search phase:

C = [X; M] (a 2N × D matrix whose first N rows are X and last N rows are M),  (1)

where M and X are the memory and humans' position matrices, respectively, X_N1 is the position of the 1st dimension of the Nth human, and M_1D is the position of the Dth dimension of the 1st memory entry. The two phases of the human search, the "social phase" and the "individual phase," are modelled as follows.
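The matrix bookkeeping described above can be sketched in a few lines of NumPy. This is a minimal illustration under our own variable names: X and M are both N × D, and the clue matrix C simply stacks them into a 2N × D matrix, as in equation (1).

```python
import numpy as np

# Minimal sketch of the SAR matrices: position matrix X, memory matrix M,
# and the clue matrix C = [X; M] from equation (1). Names are ours.
rng = np.random.default_rng(0)
N, D = 5, 3
X = rng.uniform(-10.0, 10.0, size=(N, D))   # humans' positions
M = rng.uniform(-10.0, 10.0, size=(N, D))   # positions of left clues
C = np.vstack([X, M])                       # clue matrix, eq. (1): 2N x D
print(C.shape)
```

Both phases draw clue indices from the full range 1..2N, so a "clue" can be either a current human position (rows 1..N) or an abandoned position kept in memory (rows N+1..2N).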

Social Phase.
Considering the explanations given in the previous section and taking a random clue among the found clues, the search direction is obtained using the following equation:

SD_i = (X_i - C_k),  (2)

where X_i, C_k, and SD_i are the position of the ith human, the position of the kth clue, and the search direction of the ith human, respectively, and k is a random integer between 1 and 2N chosen so that k ≠ i. Humans normally search in such a way that all desired areas are covered and no location is searched repeatedly; therefore, the search should be conducted so that the movement of the group members toward each other is limited. To this end, not all dimensions of X_i should be changed by moving in the direction of equation (2). To apply this constraint, the binomial crossover operator is used. Also, as explained in the previous section, if the considered clue is better than the clue at the current position (the objective function value of solution B is greater than that of solution A in Figure 1), an area around the SD_i direction and around the position of that clue is searched (area 1 in Figure 1); otherwise, the search continues around the current location along the SD_i direction (area 2 in Figure 1). Finally, the following equation is used for the social phase:

X'_{i,j} =
  C_{k,j} + r1 × (X_{i,j} - C_{k,j}),  if f(C_k) > f(X_i) and (r2 < SE or j = j_rand),
  X_{i,j} + r1 × (X_{i,j} - C_{k,j}),  if f(C_k) ≤ f(X_i) and (r2 < SE or j = j_rand),
  X_{i,j},  otherwise,  (3)

where X'_{i,j} is the new position of the jth dimension of the ith human; C_{k,j} is the position of the jth dimension of the kth found clue; f(C_k) and f(X_i) are the objective function values of solutions C_k and X_i, respectively; r1 is a uniformly distributed random number in the range [-1, 1]; r2 is a uniformly distributed random number in the range [0, 1] drawn anew for each dimension, whereas r1 is fixed for all dimensions; j_rand is a random integer between 1 and D that ensures at least one dimension of X'_i differs from X_i; and SE is an algorithm parameter between 0 and 1. Equation (3) yields the new position of the ith human in all dimensions.
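The social-phase update can be sketched as follows. This is our hedged reading of the description above (maximization), with names of our choosing; the per-dimension test mirrors the binomial crossover, and the base point is the clue when it is better than the current position.

```python
import numpy as np

# Minimal sketch of the SAR social phase (equations (2) and (3)),
# for a maximization problem. Function and variable names are ours.
def social_phase(x_i, C, f, i, SE, rng):
    n2, D = C.shape
    k = rng.integers(n2)
    while k == i:                        # choose a random clue with k != i
        k = rng.integers(n2)
    r1 = rng.uniform(-1.0, 1.0)          # one r1 shared by all dimensions
    sd = x_i - C[k]                      # search direction SD_i, eq. (2)
    base = C[k] if f(C[k]) > f(x_i) else x_i   # search near the better point
    j_rand = rng.integers(D)             # guarantees one changed dimension
    x_new = x_i.copy()
    for j in range(D):
        if rng.uniform() < SE or j == j_rand:  # binomial crossover
            x_new[j] = base[j] + r1 * sd[j]
    return x_new

rng = np.random.default_rng(0)
C = rng.uniform(-5.0, 5.0, size=(10, 3))       # clue matrix (2N x D)
x_new = social_phase(C[0].copy(), C, lambda v: -np.sum(v ** 2), 0, 0.05, rng)
print(x_new)
```

With SE = 0.05, most dimensions are left untouched in a given call, which is exactly the limited-movement behavior the crossover is meant to enforce.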

Individual Phase.
In the individual phase, humans search around their current position, and the idea of connecting different clues used in the social phase is applied to the search. Contrary to the social phase, all dimensions of X_i change in the individual phase. The new position of the ith human is obtained by the following equation:

X'_i = X_i + r3 × (C_k - C_m),  (4)

where k and m are random integers between 1 and 2N. To prevent movement along the human's own clues, k and m are chosen such that i ≠ k ≠ m. r3 is a uniformly distributed random number between 0 and 1.
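A sketch of this phase, under our reading of the description: the human moves from the current position along the line connecting two distinct random clues, and all dimensions change at once.

```python
import numpy as np

# Hedged sketch of the SAR individual phase (equation (4)).
def individual_phase(x_i, C, i, rng):
    n2, _ = C.shape
    k, m = i, i
    while len({i, k, m}) < 3:            # enforce i != k != m
        k, m = rng.integers(n2, size=2)
    r3 = rng.uniform()                   # uniform in [0, 1]
    return x_i + r3 * (C[k] - C[m])      # every dimension is perturbed

rng = np.random.default_rng(0)
C = rng.uniform(-5.0, 5.0, size=(10, 3))
x_new = individual_phase(C[0].copy(), C, 0, rng)
print(x_new)
```

The difference vector C_k - C_m shrinks as the clues cluster together, so the step size adapts to the spread of the population, much like a differential mutation.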

Boundary Control.
In all metaheuristic algorithms, solutions must lie within the solution space, and solutions that fall outside the allowable space must be modified. So if the new position of a human is outside the solution space, the following equation is used to modify it:

X'_{i,j} = (X_{i,j} + X^max_j) / 2,  if X'_{i,j} > X^max_j,
X'_{i,j} = (X_{i,j} + X^min_j) / 2,  if X'_{i,j} < X^min_j,  (5)

where X^max_j and X^min_j are the maximum and minimum threshold values of the jth dimension, respectively.
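The repair rule can be sketched directly. We assume the midpoint rule suggested by the symbols in the description: a violating dimension is reset to the midpoint between the previous position and the violated bound.

```python
import numpy as np

# Hedged sketch of the SAR boundary repair (equation (5)): pull each
# out-of-range dimension back to the midpoint of the previous position
# and the violated bound.
def repair_bounds(x_new, x_prev, x_min, x_max):
    out = x_new.copy()
    over, under = out > x_max, out < x_min
    out[over] = (x_prev[over] + x_max[over]) / 2.0
    out[under] = (x_prev[under] + x_min[under]) / 2.0
    return out

x_prev = np.array([0.0, 0.0])
out = repair_bounds(np.array([2.0, -3.0]), x_prev,
                    np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(out)  # first dim pulled to (0+1)/2, second to (0-1)/2
```

Unlike plain clipping, this repair keeps repaired points strictly inside the feasible region (provided the previous position was feasible), which avoids piling solutions up on the bounds.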

Updating Information and Positions.
In each iteration, the group members search according to these two phases. After each phase, if the objective function value at position X'_i, f(X'_i), is greater than that of the previous position, f(X_i), the previous position X_i is stored in a random position of the memory matrix (M) using equation (6), and X'_i is accepted as the new position using equation (7); otherwise, the new position is rejected and the memory is not updated:

M_n = X_i,  if f(X'_i) > f(X_i),  (6)
X_i = X'_i,  if f(X'_i) > f(X_i),  (7)

where M_n is the position of the nth stored clue in the memory matrix and n is a random integer between 1 and N. This type of memory updating increases the diversity of the algorithm and thus its ability to find the global optimum.
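The greedy acceptance and memory update can be sketched together. This illustration uses our own names and a toy objective; note that the abandoned position, not the new one, is what enters the memory.

```python
import numpy as np

# Sketch of the SAR greedy update (equations (6) and (7)), maximization:
# if the new position is better, archive the old position in a random
# memory slot and move; otherwise reject and leave the memory untouched.
def update(X, M, i, x_new, f, rng):
    if f(x_new) > f(X[i]):
        n = rng.integers(M.shape[0])    # random memory slot, eq. (6)
        M[n] = X[i].copy()              # remember the abandoned clue
        X[i] = x_new                    # accept the new position, eq. (7)
        return True                     # the search succeeded
    return False                        # rejected; memory unchanged

f = lambda v: -np.sum(v ** 2)           # maximize => minimize sum of squares
X = np.array([[3.0, 3.0]])
M = np.array([[9.0, 9.0]])
improved = update(X, M, 0, np.array([1.0, 1.0]), f, np.random.default_rng(0))
print(improved, X[0], M[0])
```

Writing the displaced position into a random slot (rather than always the oldest) is what gives the memory its diversity-preserving effect noted above.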
Abandoning Clues. In search and rescue operations, time is a critical factor because the lost people may be injured, and delays by the search and rescue teams may result in their deaths. Therefore, these operations must be conducted so that the largest space is searched in the shortest possible time. Accordingly, if a human cannot find better clues after a certain number of searches around his/her current position, he/she leaves that position and moves to a new one. To model this behavior, an unsuccessful search number (USN) is initially set to 0 for each human. Whenever a human finds a better clue in the first or second phase of the search, the USN is reset to 0 for that human; otherwise, it is increased by 1, as presented in the following equation:

USN_i = 0, if the search is successful; USN_i + 1, otherwise,  (8)

where USN_i indicates the number of times human i has failed to find better clues. When the USN of a human exceeds the maximum unsuccessful search number (MU), he/she moves to a random position in the search space using equation (9), and USN_i is reset to 0 for that human:

X_{i,j} = X^min_j + r4 × (X^max_j - X^min_j),  j = 1, ..., D,  (9)

where r4 is a uniformly distributed random number between 0 and 1, drawn anew for each dimension. Small values of the MU cause an increase in searches around a single clue and reduce the chances of searching other regions. The MU relates directly to the dimension of the problem: as the search space grows, the maximum number of unsuccessful searches increases, too.
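The counter-and-restart rule above can be sketched as follows; the MU value passed in is an arbitrary choice for the illustration, not the value used in the paper.

```python
import numpy as np

# Sketch of the SAR clue-abandonment rule (equations (8) and (9)):
# count consecutive failures; past MU failures, restart uniformly at
# random and reset the counter.
def update_usn(usn_i, improved, x_min, x_max, rng, MU):
    usn_i = 0 if improved else usn_i + 1            # equation (8)
    if usn_i > MU:
        r4 = rng.uniform(size=x_min.shape[0])       # fresh draw per dimension
        return 0, x_min + r4 * (x_max - x_min)      # random restart, eq. (9)
    return usn_i, None                              # keep the current position

rng = np.random.default_rng(0)
lo, hi = np.array([-1.0]), np.array([1.0])
u_ok, p_ok = update_usn(5, True, lo, hi, rng, MU=2)   # success resets counter
usn, pos = update_usn(2, False, lo, hi, rng, MU=2)    # third failure: restart
print(u_ok, usn, pos)
```

Returning `None` when no restart happens lets the caller distinguish "stay put" from "jump", which keeps the per-human state machine explicit.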

Control Parameters

For all the following tests, the value of SE was set to 0.05, and the value of MU was obtained by equation (10). The analysis of the SAR parameters has shown that these values of SE and MU are suitable for solving single-objective continuous optimization problems.

Pseudocode of Search and Rescue Optimization Algorithm (SAR).
The pseudocode of this algorithm for solving a maximization problem is presented in Algorithm 1. Position sorting is performed only once, before the iterations begin.

Conceptual Comparison of SAR with Other Metaheuristic Algorithms.
In each iteration of SAR, two new solutions are generated in two phases for each particle of the algorithm. The search concepts of these phases differ. In the social phase, new solutions are generated based on the locations and objective function values of other solutions; in this phase, each particle can move toward the other particles. In the individual phase, by contrast, new solutions are generated around the current solutions, and particles do not move toward each other. In this algorithm, a new solution is accepted only if it is better than the current solution, and the current solution is then stored in an archive (the memory). The archive is used to generate new solutions.
SAR, like PSO, ABC, DE, GSA, and TLBO, is a population-based optimization algorithm. The similarities and differences between SAR and these algorithms are explained as follows.

SAR versus PSO
Similarity. Both of them utilize social and individual information and memory to generate new solutions. Difference. In PSO, a combination of the global best position (social information) and local best position (individual information) is used to generate a new solution. But SAR separately uses social and individual information to generate two new solutions. Besides, the global best solution is not considered by SAR in these phases. PSO accepts all new solutions, but SAR only accepts new solutions which are better than current solutions. Unlike PSO, SAR leaves unimproved solutions. Also, SAR considers objective function values to generate new solutions in the social phase. Besides, the memory update mechanisms of these methods are different.

SAR versus ABC
Similarity. Both algorithms consider objective function values when producing a new solution. They accept a new solution only if it is better than the current one, and both leave unimproved solutions. Difference. ABC does not have any kind of memory. It selects solutions by the roulette wheel mechanism and generates new solutions by changing only one dimension of the selected solutions, whereas SAR selects clues randomly and uses them to generate new solutions by changing some or all dimensions of the current solutions. Thus, they use different strategies to generate new solutions, and SAR produces two new solutions for each agent in each iteration.

SAR versus DE
Similarity. Both accept only new solutions that are better than the current ones. The crossover mechanism used in the social phase of SAR is similar to that of DE. Difference. DE does not consider previous solutions when generating new ones; it is a memoryless algorithm. Although both algorithms use the same crossover mechanism, the equations applied to generate new solutions are different. DE does not leave unimproved solutions and does not consider objective function values when producing new solutions. Also, DE produces only one new solution per agent in each iteration.

SAR versus TLBO
Similarity. Both include two search phases and consider objective function values when generating new solutions. They accept only new solutions that are better than the current ones. Difference. Previous solutions are not used by TLBO; it is a memoryless algorithm. Unlike TLBO, SAR leaves unimproved solutions after a certain number of unsuccessful objective function evaluations. SAR and TLBO obtain new solutions using entirely different strategies.
Furthermore, in comparison with the above algorithms, the boundary control strategy of SAR is different.

Computational Complexity.
In this section, the computational complexity of SAR is discussed. The population initialization process requires O(2 × n × d) time, where n and d indicate the number of humans and the dimension of the problem, respectively. The proposed algorithm requires O(2n log(2n)) time to sort the population in the initialization phase. The complexity of the social and individual phases is O(n × d) in the worst case. The complexity of the abandoned-clue process is O(n) in the best case and O(n × d) in the worst case. Thus, the computational complexity of SAR is

O(2nd + 2n log(2n) + Maxit × (nd + nd + nd)) = O(Maxit × n × d),  (11)

where Maxit is the maximum number of iterations. According to the above discussion, the total computational complexity of SAR is O(Maxit × n × d).

Algorithm 1 (excerpt):
(1) Begin
(2) Randomly initialize a population of 2N solutions uniformly distributed in the range [X^min_j, X^max_j], j = 1, ..., D
(3) Sort the solutions in decreasing order and find the best position (Xbest)
(4) Use the first half of the sorted solutions for the human position matrix (X) and the others for the memory matrix (M)
(5) Define the algorithm parameters (SE, MU) and set USN_i = 0, i = 1, ..., N
(6) While the stop criterion is not satisfied do
(7) For i = 1 to N do
(8)-(30) Social phase, individual phase, boundary control, and clue abandonment for the ith human (equations (3)-(9))
(31) USN_i = 0
(32) End If
(33) End For
(34) Find the current best position and update Xbest

Numerical Results
To evaluate the performance of SAR, four tests were considered. Classic benchmark functions were used in tests 1 and 2, while modern benchmark functions were used in test 3. Finally, some structural engineering design problems were utilized in test 4. All runs were executed on a 64-bit computer with 32 GB of RAM and an Intel i7 (3.4 GHz) CPU running Windows 10.

Test 1.
As discussed in the Introduction, metaheuristic algorithms can be divided into four groups. One well-known algorithm was selected from each group for comparison with the proposed algorithm. The artificial bee colony (ABC), gravitational search algorithm (GSA), differential evolution (DE), and teaching-learning-based optimization (TLBO) algorithms were considered as the swarm-based, physics-based, evolution-based, and human-based algorithms, respectively. The population sizes and control parameters of these algorithms are given in Table 1, as suggested in [46], [33], [25], and [47]. TLBO has no control parameters.
In the first test, the performance of SAR was compared with that of ABC, GSA, DE, and TLBO on 27 benchmark functions. These functions, used by various researchers [16, 48-50], are presented in Table 2. In this table, type, D, range, and f_min represent the type of the benchmark function, the dimension of the problem, the range of variation, and the optimal value of the function, respectively. Also, X^max_j and X^min_j are the maximum and minimum threshold values of dimension j, respectively. In the type column of this table, U, M, S, and N refer to unimodal, multimodal, separable, and nonseparable functions, respectively. Thirteen benchmark functions are unimodal, while 14 are multimodal; moreover, 11 functions are separable and 16 are nonseparable. Since the locations of the global minima of some of these classical functions are symmetrical, some algorithms may exploit this feature and show an unrealistically good performance. Therefore, the symmetrical global minima are shifted using the function T defined in the last row of Table 2, which generates nonsymmetrical numbers.
In test 1, the maximum number of function evaluations (NFE) was set to 4 × 10^3 × D for all algorithms, where D is the number of dimensions of the benchmark function specified in Table 2. All algorithms were independently executed 51 times. The algorithms were stopped when the number of objective function evaluations exceeded the NFE (4 × 10^3 × D) or when the least error (the distance between the objective function value of the best found solution and that of the global optimum) was less than 10^-8.

Test 2.
The algorithms and benchmark functions considered for this test are the same as those in the first test. The difference between the first and second tests lies in the NFE. For all of these benchmark functions (except f13), this value is 2 × 10^4 × D, which is 5 times greater than the value used in the first test; for f13, the NFE is set to 3 × 10^4 × D. The purpose of the second test is to examine the ability of the algorithms to find the global minima; therefore, a high NFE value is used.
As in the first test, all the algorithms were independently run 51 times, and each algorithm was stopped when the error value (the distance between the objective function value of the best found solution and that of the global optimum) was less than 10^-8 or the number of function evaluations reached its maximum. The population and parameters of the algorithms were set as in test 1.

Test 3.
In this test, the 28 benchmark functions of the CEC 2013 Competition on Single-Objective Real-Parameter Numerical Optimization are used to compare SAR with state-of-the-art optimization algorithms. All of these functions are minimization problems; their details can be found in [51]. These functions cover various types of optimization problems and are divided into three classes: unimodal (C1-C5), basic multimodal (C6-C20), and composition (C21-C28) benchmark functions. The composition functions are created by combining different basic functions; consequently, they are multimodal, nonseparable, and asymmetrical. Each algorithm was independently run 51 times and was stopped when the number of function evaluations reached 100,000 or when the error from the global optimum was less than 10^-8. The control parameters of SAR were the same as those in the previous two tests.
The problem dimension was 10, and the variables ranged within [-100, 100]. To verify the performance of SAR on these problems, it was compared with nine optimization algorithms that have been verified on the CEC 2013 benchmark functions:

(i) Artificial bee colony (ABC)
(ii) A CMA-ES super-fit scheme for the resampled inheritance search (CMA-RIS) [52]
(iii) A GA variant (AMopGA)
(iv) Grey wolf optimizer (GWO) [30]
(v) Yin-Yang-pair optimization (YYPO) [17]
(vi) Reflected adaptive differential evolution with two external archives (RJADE) [54]
(vii) Self-adaptive differential evolution (SaDE) [55]
(viii) Self-adaptive heterogeneous PSO (f_k-PSO) [56]
(ix) Standard Particle Swarm Optimisation 2011 (SPSO) [57]

These algorithms include a CMA variant (CMA-RIS), a GA variant (AMopGA), two DE variants (SaDE and RJADE), two PSO variants (f_k-PSO and SPSO), two recent metaheuristic algorithms (GWO and YYPO), and ABC. Different studies show that these variants perform better than their basic versions. The results of ABC, GWO, and YYPO are reported in [17].

Test 4.
In order to evaluate the performance of SAR in solving real-world engineering optimization problems, three engineering design problems were utilized. The penalty function approach was employed to handle the constraints of these problems. For all the engineering optimization problems, the control parameters of SAR were set as in the previous tests, and the population size of SAR was 10. SAR was independently run 50 times. These engineering problems are introduced as follows.
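The penalty-function approach mentioned above can be sketched in a few lines. This is a generic static-penalty formulation under our own names; the penalty factor 1e6 is our choice for illustration, not a value reported in the paper.

```python
# Hedged sketch of penalty-function constraint handling: each violated
# inequality constraint g_j(x) <= 0 adds a large penalty to the
# minimized objective. The factor 1e6 is an assumed illustration value.
def penalized(f, constraints, factor=1e6):
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + factor * violation
    return wrapped

# Toy problem: minimize x[0] subject to g(x) = x[0] - 1 <= 0.
pf = penalized(lambda x: x[0], [lambda x: x[0] - 1.0])
print(pf([0.5]), pf([2.0]))   # feasible point unchanged; infeasible penalized
```

Wrapping the objective this way lets an unconstrained optimizer such as SAR be applied to the constrained design problems directly: infeasible candidates survive only if no feasible candidate of comparable quality exists.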

I-Beam Design.
The I-beam design problem was first proposed by Gold and Krishnamurty [58]. The objective is to find the minimum vertical deflection of an I-beam. The vertical deflection, defined by equation (12), depends on the design load (P), the length of the beam (L), and the modulus of elasticity (E); P, L, and E are 600 kN, 200 cm, and 20,000 kN/cm², respectively. This problem has four variables and two constraints: the cross-sectional area (constraint g1) and the bending stress of the beam (constraint g2). The maximum cross-sectional area is 300 cm², and the allowable bending stress of the beam is 56 kN/cm². The mathematical model of the problem is expressed with the bounds 10 ≤ h ≤ 80, 10 ≤ b ≤ 50, 0.9 ≤ t_f ≤ 5, and 0.9 ≤ t_w ≤ 5.

Cantilever Beam Design.
The cantilever beam design problem was first presented by Fleury and Braibant [59]. The beam consists of five hollow square cross sections; the thickness of the sections is fixed, and their heights are the design variables. A vertical force is applied at the free end of the beam, while the other end is rigidly supported. The problem has five variables and one constraint, and the goal is to minimize the weight of the beam. It can be stated as

minimize f(x) = 0.0624 (x_1 + x_2 + x_3 + x_4 + x_5),
subject to g(x) = 61/x_1^3 + 37/x_2^3 + 19/x_3^3 + 7/x_4^3 + 1/x_5^3 - 1 ≤ 0,

with bounds 0.01 ≤ x_i ≤ 100, i = 1, 2, ..., 5.

Spatial 25-Bar Truss Structure Design.
The spatial 25-bar truss design has been widely used in structural design optimization, and many optimization methods have been applied to this well-known problem. The elastic modulus and the material density of all members are 10^4 ksi and 0.1 lb/in³, respectively. The minimum and maximum cross-sectional areas of the members are 0.01 in² and 3.4 in², respectively.
This truss is subjected to the two loading conditions presented in Table 3.
Because of the symmetry of the structure, the 25 members of the truss are divided into 8 groups, as follows: A1; A2-A5; A6-A9; A10-A11; A12-A13; A14-A17; A18-A21; and A22-A25. The displacements of the nodes in both directions are limited to ±0.35 in, and the allowable stress for each group is shown in Table 4.

Performance of SAR on Classic Benchmark Functions.
The means and variances of the errors (the distance between the minimum objective value found and the optimal value of the function) obtained by the algorithms for the first test are shown in Table 5. Errors less than 10⁻⁸ are considered 0.
The Wilcoxon signed-rank test was used for pairwise comparisons of the algorithms, and the results of 51 runs of SAR were compared with those of the other algorithms [60]. In the Wilcoxon signed-rank test, the superiority of one of the two algorithms is assessed through a hypothesis test. Two hypotheses are defined: the null hypothesis (H0) and the alternative hypothesis (H1). The null hypothesis states that there is no difference between the two algorithms, and the alternative hypothesis states that there is a difference. To determine whether one algorithm is superior to the other, the p value is used: the smaller the p value, the more likely the two algorithms are different (the alternative hypothesis). The significance level α determines the threshold for accepting a hypothesis. The Wilcoxon signed-rank test results for the first test are shown in Table 6. In this paper, α is equal to 0.05; if the p value is less than 0.05, then the two algorithms are statistically different at the 95% confidence level. When h is 0, there is no difference between the two algorithms.

According to the data in Table 5, SAR and ABC found the solution for all five US benchmarks (both unimodal and separable) in all runs, whereas TLBO, DE, and GSA failed to find the global minimum on 2, 2, and 1 functions, respectively. DE and SAR maintained almost the same performance on the 8 UN functions, finding the global minimum in all runs except on three functions (f8, f10, and f13), on which SAR performed better than DE. TLBO had a performance close to that of these two algorithms but failed to find the minimum value on f11 in addition to these three functions.
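The pairwise test described above can be sketched as follows. This stdlib-only Python sketch uses the normal approximation to the signed-rank statistic (reasonable for the paper's n = 51 paired runs) rather than exact tables, and is an illustration, not the paper's code:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.
    Returns (z, p). Zero differences are discarded; the tie correction
    to the variance is omitted for brevity."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    if n == 0:
        return 0.0, 1.0
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                      # average ranks over ties on |diff|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

With α = 0.05, p < 0.05 rejects H0; the sign (+, =, or −) then follows from which algorithm has the smaller errors.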
Both ABC and GSA failed to find the minimum value for any of the UN functions and had a lower convergence rate on these functions than the other algorithms. SAR had the best performance on the MS functions and was able to find the global minimum in four of the six functions; like the other algorithms, it failed to find the global minima of f18 and f19. Among the other four algorithms, ABC had the lowest mean error. On the 8 MN functions, SAR performed better than the other algorithms; it only failed to find the global minimum of f22, but even there it had smaller errors than the others.
In Table 6, SAR was compared with the other algorithms using the two-sided Wilcoxon signed-rank test. In the last row of this table, the superiority of the first algorithm (SAR) over the second is indicated by the sign (+), equal performance by the sign (=), and the superiority of the second algorithm over the first by the sign (−). According to the data in Table 6, SAR outperformed ABC on 13 functions, while they had similar performance on 13 functions; ABC outperformed SAR only on f13. TLBO and GSA were not better than SAR on any function. DE outperformed SAR on f8, while SAR outperformed it on 9 functions, and they had similar performance on 17 functions. Accordingly, SAR outperformed ABC, DE, GSA, and TLBO on the classic benchmark functions.

Global Search Ability.
In the second test, the ability of the algorithms to find the global minimum was investigated; for this purpose, the maximum number of function evaluations (NFE) was increased. Table 7 shows the success percentages of the algorithms in finding the global minimum for the 27 benchmark functions, along with the average percentages for the 4 benchmark function types (unimodal, multimodal, separable, and nonseparable). The lowest NFE among the 5 algorithms is highlighted in bold for each function. ABC was not able to find the minimum value of the UN functions in any run except on f11, which indicates the slower convergence rate of this algorithm on these types of problems. Moreover, as is clear from Table 7, ABC had the lowest success rate for such problems among all the algorithms.

Convergence Rate Analysis.
In Table 7, in addition to the success percentages, the average numbers of function evaluations needed to reach the stopping conditions are presented for the compared algorithms. Based on this table, SAR had the fastest convergence rate on 13 functions among all the studied algorithms. In Figure 3, the convergence curves of SAR for some of the benchmark functions are shown. These functions include unimodal and separable (f2, f3, and f4), unimodal and nonseparable (f6, f7, and f10), multimodal and separable (f15, f17, and f19), and multimodal and nonseparable (f20, f22, and f25) functions. These curves were obtained by averaging 51 independent runs for all the algorithms.
Also, the Wilcoxon signed-rank test was used to check the convergence rates of the algorithms more precisely. In this test, the numbers of function evaluations required by SAR were compared with those of the other algorithms over 51 runs using the two-sided Wilcoxon signed-rank test (α = 0.05), and the results are presented in Table 8. The comparison procedure is similar to that in Table 6; a + for h indicates that the first algorithm has a faster convergence rate than the second.
According to the data in Table 8, SAR had a faster convergence rate than ABC on 26 functions, while ABC was faster than SAR only on f1; therefore, it can be concluded that SAR converges faster than ABC. Also, SAR converged faster than GSA on all 27 functions. SAR converged faster than DE on 10 functions, slower on 13 functions, and at an equal rate on 4 functions: on the unimodal functions DE converged faster than SAR, but SAR was faster on the multimodal functions. In comparison with TLBO, the convergence rate of SAR was faster, equal, and slower on 15, 4, and 11 functions, respectively. According to the data in Table 8 and Figure 3, the convergence rate of SAR was significantly higher than that of GSA and ABC. Also, SAR converged faster than TLBO on more than half of the functions. The convergence rate of DE was almost equal to that of SAR, but SAR converged faster than DE on the multimodal functions.

Performance of SAR on CEC 2013 Benchmark Function Set.
The mean errors obtained by the ABC, CMA-RIS, AMopGA, GWO, YYPO, RJADE, SaDE, fk-PSO, SPSO, and SAR algorithms for the third test are reported in Tables 9-11. Solutions with errors less than 10⁻⁸ are considered 0. Moreover, by comparing these 10 algorithms, the rank of each algorithm is determined for each function and provided in these tables; the lowest rank (1) corresponds to the algorithm with the lowest mean error. The best mean error is highlighted in bold in these tables.
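The ranking procedure used in Tables 9-12 can be sketched as follows. This is an illustrative reconstruction, not the paper's code: rank the algorithms per function by mean error (averaging ranks over ties), then average across functions to obtain each algorithm's mean rank.

```python
def rank_algorithms(mean_errors):
    """mean_errors maps algorithm name -> list of mean errors, one per
    function. Returns algorithm name -> mean rank, where rank 1 goes to
    the lowest error and tied errors receive their average rank."""
    algs = list(mean_errors)
    n_fun = len(next(iter(mean_errors.values())))
    mean_rank = {a: 0.0 for a in algs}
    for f in range(n_fun):
        vals = sorted((mean_errors[a][f], a) for a in algs)
        i = 0
        while i < len(vals):          # walk runs of tied errors
            j = i
            while j + 1 < len(vals) and vals[j + 1][0] == vals[i][0]:
                j += 1
            r = (i + j) / 2 + 1       # average rank for the tie group
            for k in range(i, j + 1):
                mean_rank[vals[k][1]] += r / n_fun
            i = j + 1
    return mean_rank
```

The algorithm with the smallest returned value is the overall best, matching how the overall ranks in Table 12 are derived from the mean ranks.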
In Table 9, the results obtained by the algorithms for the unimodal functions are shown. These functions are well suited to comparing the exploitation ability of the algorithms. As is clear from Table 9, SAR and CMA-RIS were able to find the minimum point in 100% of the runs for 4 of these 5 functions and are better than the other algorithms. The mean error of CMA-RIS is lower than that of SAR on the C3 function; therefore, CMA-RIS is the best algorithm for the unimodal functions. The mean ranks and overall ranks are shown in Table 12. Regarding the data in this table, after CMA-RIS, SAR is the second best of the 10 compared algorithms for the CEC 2013 unimodal functions. The DE variants are the third best, and there is a large gap between their mean rank (2.8) and those of SAR and CMA-RIS (mean ranks 1.2 and 1.4, respectively). Hence, SAR is efficient for unimodal functions and has high exploitation capability.

Table 10 illustrates the results of the algorithms for the basic multimodal functions. These functions are well suited to comparing the algorithms' ability in exploration and in finding the global optimum. SaDE and AMopGA found the best results for 4 of the 15 basic multimodal functions. According to the data in Table 12, SaDE was the best algorithm with a mean rank of 3.4, and SAR was ranked second with a mean rank of 3.53; the performance of SaDE is only slightly better than that of SAR. After them, fk-PSO, with a mean rank of 3.93, was the third best of the 10 compared algorithms for the basic multimodal functions. These results confirm the high exploration ability of SAR in comparison with the other algorithms.

The results obtained by the 10 compared algorithms on the 8 composition functions are shown in Table 11. Among all the compared algorithms, SAR obtained the best results for the C22 and C26 functions and ABC for the C21, C25, and C28 functions.
From Table 12, it can be seen that SAR, with a mean rank of 3.25, had the best performance on the composition functions. After SAR, ABC was the second best algorithm with a mean rank of 3.88, and CMA-RIS was next with a mean rank of 4 among the 10 algorithms.
The results show that SAR outperformed the compared algorithms on these kinds of optimization problems.
In Table 12, the mean and overall ranks of the compared algorithms on the unimodal, multimodal, and composition benchmark functions of CEC 2013 are reported, together with their mean and overall ranks on all the benchmark functions. It can be seen that the mean rank of SAR is the lowest, making it the best of all the studied algorithms on the CEC 2013 benchmark function set. After SAR, SaDE performs better than the others, slightly outperforming CMA-RIS; RJADE was the fourth best algorithm in this test. The performance of GWO is the worst for all kinds of benchmark functions, and it was ranked 10th among the 10 studied algorithms.
In Tables 9-12, the algorithms were compared and ranked as a group. In Table 13, the mean errors obtained by each of these 9 algorithms are compared with those of SAR.
According to the data in Table 13, SAR was significantly better than ABC, GWO, YYPO, AMopGA, RJADE, and SPSO. After SAR, SaDE, CMA-RIS, and RJADE achieved the best performance, respectively. SaDE and CMA-RIS had lower mean errors than SAR on 9 functions, while SAR had lower errors than them on 15 functions, and their mean errors were equal on 4 functions.

Application of SAR on Engineering Optimization Problems.
SAR was applied to three engineering design problems, and the obtained results were compared with those of other algorithms documented in the literature. These problems were defined in the previous section. SAR and all the compared algorithms satisfied the constraints of these problems. In the following tables, "Std.," "NFE," and "NA" denote standard deviation, number of function evaluations, and not available, respectively, and the best results are highlighted in bold.

I-Beam Design.
This problem was solved using several other methods, including cuckoo search (CS) [61], the adaptive response surface method (ARSM) [62], improved ARSM (IARSM) [62], and symbiotic organisms search (SOS) [63]. The results obtained by SAR and these algorithms are presented in Table 14. The optimal designs found by SAR and SOS are the same and better than those of the others, and the average and standard deviation of SAR are slightly better than those of SOS. The convergence curve of SAR for the I-beam design problem is shown in Figure 4.

Table 15 compares the optimal designs found by SAR and the other algorithms, including the method of moving asymptotes (MMA) [64], generalized convex approximation I and II (GCA(I) and GCA(II)) [64], symbiotic organisms search (SOS) [63], cuckoo search (CS) [61], the flower pollination algorithm (FPA) [65], and the modified firefly algorithm (MFA) [66]. According to the data in this table, the lowest cantilever beam weight was found by SAR. Moreover, SAR required only 10,000 function evaluations to find the optimum design, fewer than any of the others; hence, SAR had the fastest convergence rate among all the compared algorithms. Also, the average and standard deviation of SAR are better than those of SOS. The convergence curve of SAR for the cantilever beam design problem is shown in Figure 5.
Regarding the data in Table 16, SAR found the best design for the spatial 25-bar truss (545.0365 lb) and required fewer structural analyses than the other algorithms. The average and standard deviation of SAR are 545.0391 lb and 0.0064 lb, respectively, which are significantly better than those of the other algorithms.
These results clearly indicate that SAR outperforms the other algorithms. Figure 6 shows the convergence curve of SAR for the spatial 25-bar truss design problem.

Analysis of SAR Parameters.
In this section, the effect of the SAR parameters on solving the benchmark problems was investigated. Two classic (f13 and f18) and four modern (C3, C10, C22, and C28) benchmark functions were utilized for this test; f13 and C3 are unimodal functions, and f18, C10, C22, and C28 are multimodal functions. For the classic and modern functions, the NFE was set to 2 × 10⁴ × D and 10⁵, respectively. The population size of SAR was 20, the algorithm was independently executed 20 times for each configuration, and errors less than 10⁻⁸ were considered zero.

The mean errors obtained by SAR for different values of the SE are reported in Table 17. To evaluate the effect of the SE on the performance of SAR, the MU was set to a large value, because unimproved clues are not left behind for such a value of the MU; hence, the effect of the MU was removed. According to the data in Table 17, the optimum value of the SE depends on the nature of the problem. High values of the SE (more than 0.8) lead to unsatisfactory performance of SAR. Furthermore, SE values between 0.6 and 0.8 are suitable for unimodal functions, and SE values lower than 0.1 are suitable for multimodal functions. Since solving multimodal functions is generally more difficult than solving unimodal ones, the SE was set to 0.05 for all the problems studied in this paper.

Table 18 shows the mean errors obtained by SAR with fixed SE = 0.05 for different values of the MU. From this table, it can be seen that lower values of this parameter decrease the convergence rate of the algorithm and increase the chance of avoiding local minima. According to the results, the best value of the MU for the studied problems is 70 × D; however, SAR is not very sensitive to this parameter. The results indicate that the SE affects the performance of SAR more than the MU does; thus, the SE is the key parameter of the proposed algorithm.
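A parameter study like the one above can be organized as a simple grid sweep. The snippet below is a generic harness; the `run_once(se, mu, rng)` callback standing in for a single SAR run is hypothetical, since SAR itself is not reproduced here.

```python
import itertools
import random

def sweep(run_once, se_values, mu_values, runs=20, seed=1):
    """Return the mean final error for every (SE, MU) combination,
    averaging `runs` independent executions per configuration.
    The same seed sequence is reused per configuration for fairness."""
    results = {}
    for se, mu in itertools.product(se_values, mu_values):
        rng = random.Random(seed)
        errors = [run_once(se, mu, rng) for _ in range(runs)]
        results[(se, mu)] = sum(errors) / runs
    return results
```

`min(results, key=results.get)` then picks the configuration with the lowest mean error, mirroring how the best SE and MU values are read off Tables 17 and 18.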

Conclusion
A new metaheuristic optimization algorithm called the search and rescue optimization algorithm (SAR) was introduced here for solving single-objective optimization problems. SAR is inspired by the explorations carried out by humans during search and rescue operations. The proposed algorithm consists of two phases, the social phase and the individual phase, and its implementation is relatively simple. The tests carried out in this paper have shown that combining these two phases with the use of memory leads to a balance between the exploration and exploitation processes in SAR. The performance of SAR was compared with that of twelve different optimization algorithms on fifty-five continuous benchmark problems, including a set of 27 classic benchmark problems and a set of 28 modern CEC 2013 benchmark functions. The compared algorithms included recent variants of DE (SaDE and RJADE), GA (AMopGA), PSO (SPSO and fk-PSO), and CMA-ES (CMA-RIS), as well as some recent metaheuristic algorithms (ABC, GSA, TLBO, GWO, and YYPO). The Wilcoxon signed-rank test was used for some of the comparisons, and the convergence behavior of SAR was investigated. The statistical results indicated that SAR is suitable for global optimization and highly competitive with the other algorithms; in many cases, the proposed algorithm performed better than most of the compared algorithms in terms of finding the global optimum and convergence rate. Besides, to verify the applicability of the proposed algorithm to real-world optimization problems, SAR was tested on three engineering design problems: the I-beam design, the cantilever beam design, and the spatial 25-bar truss structure design. The obtained results revealed that the proposed algorithm can find more accurate solutions with fewer function evaluations than the other existing algorithms.
In future works, the performance of SAR on other types of optimization problems such as combinatorial and large-scale optimization problems will be investigated.
Data Availability

The data supporting this study are from previously reported studies and datasets, which have been properly cited in this paper. All problems were completely defined in this paper, except the CEC 2013 instances. The codes of the CEC 2013 benchmark instances and the algorithms implemented for this study were downloaded from the following links. The CEC 2013 benchmark data used to support the findings of this study have been deposited in the CEC2013matlab repository at http://web.mysites.ntu.edu.sg/epnsugan/PublicSite/Shared%20Documents/CEC2013/ce_c13matlab.zip. The MATLAB source code of SAR used to support the findings of this study is available from Amir Shabani upon request (email: amir48ash@gmail.com). The algorithms used for comparison with the proposed algorithm are available from the following links: ABC: http://mf.erciyes.edu.tr/abc/; DE: https://ch.mathworks.com/matlabcentral/fileexchange/52897differentialevolution-de; GSA: http://www.mathworks.com/matlabcentral/fileexchange/27756-gravitationalsearch-algorithmgsa; and TLBO: https://ch.mathworks.com/matlabcentral/fileexchange/52863-teaching-learningbased-optimization-tlbo.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.