Social Network Search for Solving Engineering Optimization Problems

In this paper, a new metaheuristic optimization algorithm, called social network search (SNS), is employed for solving mixed continuous/discrete engineering optimization problems. The SNS algorithm mimics social network users' efforts to gain more popularity by modeling the decision moods through which they express their opinions. Four decision moods, namely imitation, conversation, disputation, and innovation, are real-world behaviors of users in social networks. These moods are used as optimization operators that model how users are affected and motivated to share their new views. The SNS algorithm was verified with 14 benchmark engineering optimization problems and one real application in the field of remote sensing. The performance of the proposed method is compared with that of various well-known optimizers in terms of computational cost and accuracy. In most cases, the optimal solutions achieved by the SNS are better than the best solutions obtained by the existing methods.


Introduction
Optimization is part of the nature of human activity: almost all human decisions go through an optimization process [1]. Optimization is embedded in the essence of many branches of science, for example, a system with minimal energy in physics, maximum profit in business, survival of the fittest organism in biology, and the design of an engineering system that satisfies a set of constraints [2,3]. Almost all engineering problems contain several nonlinear and complex constraints that depend on the design criteria and safety rules.
Over the last decades, various types of methods have been developed to solve constrained engineering problems. Two well-known groups of these methods are mathematical and metaheuristic methods. The idea of mathematical methods can be attributed to the development of the calculus of variations [4]. These methods employ the gradient of the objective function and of the constraints of the problem to find the optimal solution, and their results are exact. However, these approaches search in a space near the starting point, which makes them sensitive to it: only a well-chosen starting point can lead to the global optimum. In dealing with complex optimization problems, these methods are not suitable and frequently reach local solutions, and in some real applications, the gradient of the objective function and constraints cannot be calculated [5].
These drawbacks push researchers towards metaheuristic methods. Metaheuristic methods combine basic heuristic methods with randomization and rule-based strategies, which are usually taken from natural phenomena such as evolution, swarm intelligence, and the governing laws of different physical theories. Metaheuristic algorithms are approximate, but their results have high accuracy and are very close to the global optimum solution [6]. These methods are problem-independent, and the starting point does not determine the quality of the final solutions. Besides, these methods employ different operators to perform a global search in the problem space at an appropriate speed. These features have made them popular in recent decades, and such algorithms are among the most popular techniques employed for solving optimization problems in different fields, such as computer and electrical engineering [7], water, geotechnical, and transport engineering [8], structural and infrastructure engineering [9], robotics [10], project and construction management [11], feature selection and data mining [12,13], industrial and manufacturing applications [14], and medicine and biology [15].
The term metaheuristic was first introduced by Glover [16]. The word metaheuristic is a combination of two old Greek words: meta and heuristic. The word heuristic has its origin in the old Greek word heuriskein, which means the art of discovering new strategies (rules) to solve problems. The prefix meta is also a Greek word that means "upper-level methodology" [17]. Almost every metaheuristic algorithm follows the general process shown in Figure 1. The algorithm steps cause fundamental differences in the performance of algorithms when faced with different problems. In other words, the algorithm steps represent the unique operators of each algorithm by which new solutions are generated, and the operators of each algorithm reflect the optimizing process of the particular phenomenon that the algorithm imitates.
According to the type of basic phenomena of each method, metaheuristic algorithms can be classified into four main categories: (1) evolutionary, (2) swarm intelligence, (3) physics-based, and (4) human-based algorithms. Evolutionary algorithms are motivated by natural evolution. Swarm intelligence algorithms model the natural behavior of animals in teamwork such as foraging and hunting. Physical phenomena and laws of science inspire physics-based algorithms, and finally, human-based algorithms mimic various optimal behaviors of humans in different conditions. Some of the most popular and novel algorithms are presented in Table 1.
Each of these algorithms can behave differently when dealing with different problems, so a particular algorithm may fail to solve some problems. Therefore, it is necessary to create new high-performance optimization algorithms that are able to solve more types of problems. Novel metaheuristic methods are developed to find the optimal solution of complex and large-scale problems in less time than previous ones, with higher accuracy. These aims are satisfied by developing more robust algorithms that have a better ability to search the problem space for a better solution. This property arises from the right balance between exploration and exploitation in the proposed algorithm: exploitation means searching around the current best solutions, while exploration tries to explore the search space more efficiently, often by randomization [42].
In addition to inventing novel algorithms based on natural phenomena, developing new algorithms by hybridizing the operators of current methods or modifying them is a hot topic in the field of metaheuristic algorithms. The firefly algorithm with chaos [43], the hybrid particle swarm optimizer, ant colony strategy, and harmony search scheme (HPSACO) [44], island-based cuckoo search with highly disruptive polynomial mutation (iCSPM) [45], the quantum-behaved developed swarm optimizer (QDSO) [46], hybrid self-assembly with particle swarm optimization (SAPSO) [47], the upgraded whale optimization algorithm (UWOA) [48], fuzzy controllers with the slime mould algorithm (SMAF) [49], and the hybrid invasive weed optimization-shuffled frog-leaping algorithm (SFLA-IWO) [50,51] are some of the newly developed hybrid or modified optimization algorithms.
The social network search (SNS) algorithm is a robust metaheuristic algorithm that was introduced as a novel method for solving optimization problems, and its results have shown that it is capable of outperforming various methods on different optimization problems [42]. The SNS algorithm simulates human behavior as users of a social network. Social network users can influence the opinions of other users on the network by sharing their views, opinions, and thoughts, and each user can in turn share their thoughts on the network and affect other people's opinions. In other words, the SNS simulates the particular moods through which the views and opinions of users are influenced by their communications.
This paper investigates the performance of the SNS algorithm using 14 constrained engineering optimization problems and a real application in the field of satellite image segmentation. The obtained results are compared with other optimizers in terms of best function value and number of function evaluations, and in most cases, the solutions of the SNS are better than those of the other methods. The rest of this paper is organized as follows. Section 2 describes the SNS algorithm and the constraint-handling technique. The performance of the SNS algorithm in solving optimization problems is evaluated against other methods in Section 3. Finally, conclusions are given in Section 4.

Table 1: List of some popular and new metaheuristic algorithms.

Algorithm | References
Evolution strategy (ES) | [18]
Genetic algorithms (GA) | [19]
Ant colony optimization (ACO) | [20]
Particle swarm optimization (PSO) | [21]
Differential evolution (DE) | [
(remaining rows of Table 1 omitted)

Materials and Methods
This section presents the general framework of the SNS algorithm and the constraint-handling technique utilized for solving engineering optimization problems.

2.1. Social Network Search Algorithm (SNS). Human beings are social species that always try to communicate with each other. Social networks are virtual tools created for this purpose with the advent of technology. The proposed SNS algorithm simulates the interactive behavior among users of social networks as they try to achieve more popularity. Social networks are platforms where users can interact virtually with other users, and interacting with other users of the network may affect their opinions. This process of interacting with and influencing other users goes through an optimal process, in that users are always trying to increase their level of popularity on the network. The main property of social networks is that users can follow other persons, as shown in Figure 2. If a user shares a new post, that person's followers are informed about the shared topic. This feature (fast propagation of views), which stems from the high connectivity of users, has turned social networks into a powerful tool for promoting information and ideas, as demonstrated in Figure 2.

Social Network Search
In social networks, users' viewpoints can be affected by others' views through different moods, namely imitation, conversation, disputation, and innovation. One of these moods, each of which resembles a real-world social behavior, creates the new solution in the SNS algorithm. The description and mathematical model of each mood are as follows [42].
2.1.1. Mood 1: Imitation. Imitation means that the views of other users are attractive, and users usually try to imitate each other in expressing their opinions as follows:

X_i,new = X_j + rand(−1, 1) × R,
R = rand(0, 1) × r,
r = X_j − X_i, (1)

where X_j represents the view vector of the jth user, which is selected randomly (i ≠ j), X_i is the view vector of the ith user, and rand(−1, 1) and rand(0, 1) are two random vectors in the intervals [−1, 1] and [0, 1], respectively. In this mood, the new solution is generated inside the imitation space (Figure 3(a)), which is created using the radii of shock and popularity. The shock radius (R) reflects the amount of influence of the jth user, and its magnitude is a random multiple of r. The value of r is the popularity radius of the jth user, calculated from the difference between the opinions of the ith and jth users. The final effect of the shock radius is obtained by multiplying it by a random vector in the interval [−1, 1]: if a component of the random vector is positive, the shared view agrees with the jth opinion, and vice versa. The process of the imitation mood is illustrated in Figure 3(a): using equation (1), the imitation space is formed, and then a point inside it is shared on the network as a new view.
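As a small sketch, the imitation mood described above can be written in a few lines of NumPy. The update rule follows the published SNS formulation [42]; the function name and the explicit random generator argument are illustrative choices of this sketch:

```python
import numpy as np

def imitation(X_i, X_j, rng=np.random.default_rng()):
    """Mood 1: generate a new view inside the imitation space around X_j.

    r is the popularity radius (the difference between the two opinions),
    R is the shock radius (a random multiple of r), and the new view is
    X_j shifted by R scaled with a random vector in [-1, 1]."""
    r = X_j - X_i                            # popularity radius of user j
    R = rng.uniform(0, 1, X_i.shape) * r     # shock radius, a random multiple of r
    return X_j + rng.uniform(-1, 1, X_i.shape) * R
```

Because the shift is bounded by |r|, every component of the new view stays within one popularity radius of X_j, which is exactly the imitation space of Figure 3(a).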

2.1.2. Mood 2: Conversation. In social networks, users can communicate with each other and benefit from their conversations about different issues according to

X_i,new = X_k + R,
R = rand(0, 1) × D,
D = sign(f_i − f_j) × (X_j − X_i), (2)

where rand(0, 1) is a random vector in the interval [0, 1], X_j and X_k are the vectors of two randomly selected positions such that i ≠ j ≠ k, and f_i and f_j are the objective function values of X_i and X_j, respectively.
This mood models a state in which users learn from each other and increase their information about events. In a conversation, users gain insight into a specific issue through other views and, owing to the differences in opinions, can draw a new vision of the issue under discussion. X_k denotes the vector of the issue, which is randomly chosen to be talked about; R is the effect of the chat, which is based on the difference of opinions and represents the change in beliefs about the issue; and D is the difference between the views of the users. In addition, sign(·) is the sign function, and sign(f_i − f_j) determines the moving direction of X_k by comparing f_i and f_j. The process of this decision mood is shown in Figure 3(b): the user's view about the issue changes as a result of the conversation with the jth user, and the changed opinion is shared with the others as a new view. Changing the user's view about the event is modeled as the relocation of the event.
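The conversation mood can likewise be sketched in NumPy, following the published SNS formulation [42] (function and argument names are illustrative):

```python
import numpy as np

def conversation(X_i, X_j, X_k, f_i, f_j, rng=np.random.default_rng()):
    """Mood 2: relocate the discussed issue X_k by the chat effect R.

    D is the directed difference of the users' views; sign(f_i - f_j)
    points the move toward the better of the two users."""
    D = np.sign(f_i - f_j) * (X_j - X_i)   # difference of views, with direction
    R = rng.uniform(0, 1, X_k.shape) * D   # effect of the chat
    return X_k + R
```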
2.1.3. Mood 3: Disputation. In this mood, users discuss a subject with a group of other users, whose views are selected randomly, and may be influenced by the expressed reasons. The new affected view in disputation is as follows:

X_i,new = X_i + rand(0, 1) × (M − AF × X_i),
M = (Σ_t X_t) / N_r,
AF = 1 + round(rand), (3)

where rand(0, 1) is a random vector in the interval [0, 1], M is the mean view of the commenting group, and N_r is a random integer between 1 and N_user (N_user is the population or network size). N_r determines the number of users who participate in the disputation, and the participants are selected randomly. AF is the admission factor, which indicates how strongly users insist on their own opinion in discussions with other persons, and is a random integer that can be either 1 or 2. round(·) is a function that rounds its input to the nearest integer. The process of the disputation mood is shown in Figure 3(c).
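A sketch of the disputation mood follows; the pull toward the group-mean view is the published SNS formulation [42], while the helper names are illustrative:

```python
import numpy as np

def disputation(X_i, population, rng=np.random.default_rng()):
    """Mood 3: pull X_i toward the mean view M of a randomly sized,
    randomly chosen group of users; AF (1 or 2) models how strongly
    the user insists on their own opinion."""
    n_user = len(population)
    N_r = rng.integers(1, n_user + 1)                   # group size in [1, n_user]
    group = population[rng.choice(n_user, N_r, replace=False)]
    M = group.mean(axis=0)                              # mean view of the group
    AF = 1 + round(rng.random())                        # admission factor: 1 or 2
    return X_i + rng.uniform(0, 1, X_i.shape) * (M - AF * X_i)
```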

2.1.4. Mood 4: Innovation. Sometimes a topic that users share on the network comes from their new experiences and thoughts. In this mood, the new solution is developed by changing a randomly selected variable of X_i as follows:

n_new^d = lb_d + rand_1 × (ub_d − lb_d),
x_i,new^d = t × x_j^d + (1 − t) × n_new^d, t = rand_2,
X_i,new = [x_i^1, x_i^2, ..., x_i,new^d, ..., x_i^D], (4)

where d is the index of the variable selected randomly in the interval [1, D], D is the number of problem variables, and rand_1 and rand_2 are two random numbers in the interval [0, 1]. Also, ub_d and lb_d are the maximum and minimum values of the dth variable. n_new^d represents the new idea about the selected dimension, and x_j^d is the current idea about the dth variable presented by another user (the jth user, selected randomly with i ≠ j), which the ith user wants to change because of the new idea (n_new^d). Innovation models a state in which a person thinks about a specific issue, perhaps looks at it in a novel way, and is thereby able to understand the nature of the problem more accurately or to find a completely different view of it. A particular subject may have distinct features, each of which affects the understanding of the problem. As a result, by changing the idea about one of them (x_i^d), the general concept of the subject changes, and a novel view is achieved. x_i,new^d is the new insight into the issue from the dth viewpoint and replaces the current view (x_i^d). The outline of the construction of the new view is shown in Figure 3(d).
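The innovation mood, which perturbs only one randomly chosen variable, can be sketched as follows (the blend of the jth user's idea with a fresh in-range value follows the published SNS formulation [42]; names are illustrative):

```python
import numpy as np

def innovation(X_i, X_j, lb, ub, rng=np.random.default_rng()):
    """Mood 4: replace one randomly selected variable of X_i with a blend
    of the jth user's idea and a brand-new idea drawn from that
    variable's range."""
    X_new = np.array(X_i, dtype=float)
    d = rng.integers(len(X_i))                      # randomly selected dimension
    n_new = lb[d] + rng.random() * (ub[d] - lb[d])  # new idea within [lb_d, ub_d]
    t = rng.random()
    X_new[d] = t * X_j[d] + (1 - t) * n_new         # blended new insight
    return X_new
```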
In the SNS algorithm, only one of the four predefined models, the so-called decision moods, is selected and executed randomly for each user in each iteration of the algorithm. In other words, all of the moods described here are real-world behaviors of users in social networks, and the reasonable assumption is that only one of these moods occurs at a specific time (iteration) for each user. Accordingly, each mood is given an equal chance of occurrence through a random procedure with a uniform distribution.
An important point is that the SNS algorithm has no specific parameter to be fine-tuned, and this feature is one of its advantages. In the third mood of the SNS algorithm, AF is defined as a random integer, so it can be considered a deterministic parameter whose value is generated randomly. To utilize the SNS algorithm, one only needs to determine the number of users (population size) and the maximum number of evaluations or iterations. The flowchart of the SNS algorithm is illustrated in Figure 4. Besides, the MATLAB code of the SNS algorithm for solving engineering optimization problems is available in [52].
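Putting the four moods together, the main loop of the algorithm might be sketched as follows. The mood update rules follow the published SNS formulation [42]; the greedy acceptance of improved views and the use of clipping to limit new views to the bounds are assumptions of this sketch, not details confirmed by the text:

```python
import numpy as np

def sns_minimize(f, lb, ub, n_user=30, n_iter=300, seed=1):
    """Sketch of the SNS main loop: per user and iteration, pick one of
    the four moods at random (Step 1), limit the new view to the bounds
    (Step 2), evaluate it (Step 3), and keep it if it improves."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = rng.uniform(lb, ub, (n_user, dim))        # initial views of all users
    F = np.array([f(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_user):
            others = [t for t in range(n_user) if t != i]
            j, k = rng.choice(others, 2, replace=False)
            mood = rng.integers(4)                # Step 1: select a decision mood
            if mood == 0:    # imitation
                new = X[j] + rng.uniform(-1, 1, dim) * rng.uniform(0, 1, dim) * (X[j] - X[i])
            elif mood == 1:  # conversation
                new = X[k] + rng.uniform(0, 1, dim) * np.sign(F[i] - F[j]) * (X[j] - X[i])
            elif mood == 2:  # disputation
                group = rng.choice(n_user, rng.integers(1, n_user + 1), replace=False)
                AF = 1 + round(rng.random())
                new = X[i] + rng.uniform(0, 1, dim) * (X[group].mean(axis=0) - AF * X[i])
            else:            # innovation
                new = X[i].copy()
                v = rng.integers(dim)
                n_new = lb[v] + rng.random() * (ub[v] - lb[v])
                t = rng.random()
                new[v] = t * X[j, v] + (1 - t) * n_new
            new = np.clip(new, lb, ub)            # Step 2: limit the new view
            f_new = f(new)                        # Step 3: evaluate the new view
            if f_new < F[i]:                      # keep the view if it improves
                X[i], F[i] = new, f_new
    best = int(np.argmin(F))
    return X[best], F[best]
```

On a smooth test function such as the sphere, this sketch converges quickly toward the optimum with only the population size and iteration budget specified, which reflects the parameter-free character claimed above.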

Constraint-Handling Technique.
Most engineering optimization problems aim to find optimal solutions under special conditions, which are usually based on resource limitations, design principles, and safety requirements. These special conditions are called constraints, and the main aim of constrained optimization is to find a feasible solution that optimizes the objective.
A constrained optimization problem can be formulated as

minimize: f(X), X = [x_1, x_2, ..., x_d],
subject to: g_i(X) ≤ 0, i = 1, 2, ..., n_g,
h_j(X) = 0, j = 1, 2, ..., n_h,
lb_k ≤ x_k ≤ ub_k, k = 1, 2, ..., d,

where f(X) is the objective function, X is the vector of solution variables (which can be continuous, discrete, or mixed), g_i(X) and h_j(X) are the inequality and equality constraints, respectively, n_g is the number of inequality constraints, n_h is the number of equality constraints, d is the number of variables, and lb_k and ub_k are the minimum and maximum permissible values of the kth variable, respectively. It is worth noting that a feasible solution satisfies all constraints, whereas an infeasible solution violates at least one constraint [53]. In dealing with equality constraints, h_j(X) = 0 is replaced with the inequality |h_j(X)| − δ ≤ 0, where δ is a positive tolerance value. Another approach for handling equality constraints is to replace h_j(X) = 0 with the two inequality constraints h_j(X) ≤ 0 and h_j(X) ≥ 0. This strategy facilitates convergence to the optimum design [54]. Therefore, all constraints can be transformed into inequality constraints.
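The equality-to-inequality relaxation described above is a one-liner in code (the helper name is illustrative):

```python
def equality_to_inequality(h, delta=1e-4):
    """Relax an equality constraint h(X) = 0 into the inequality
    |h(X)| - delta <= 0, where delta is a positive tolerance."""
    return lambda X: abs(h(X)) - delta
```

For example, h(X) = x1 + x2 − 1 becomes feasible at any point where |x1 + x2 − 1| does not exceed delta.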
Metaheuristic algorithms cannot solve constrained optimization problems directly. Therefore, they need to be equipped with an additional tool for handling constraints. A group of methods, called constraint-handling techniques (CHTs), has been developed for this purpose.
The CHTs enable optimization methods to handle the objective function and constraints simultaneously. CHTs are grouped into five categories: (1) penalty functions, (2) special representations and operators, (3) repair algorithms, (4) separation of objectives and constraints, and (5) hybrid methods [53]. The first group, penalty functions, is a simple and standard procedure for handling constraints. In the penalty function approach, a penalty term is added to the objective function, so the constrained optimization problem is transformed into an unconstrained one. A penalty function can be formulated as follows:

F(X) = f(X) + P(X),

where F(X) is the fitness function, which expresses the unconstrained form of the constrained problem, f(X) is the objective function, and P(X) is the penalty term that measures the violation of the constraints and is calculated as follows:

P(X) = Σ_{i=1}^{n_g} α_i × max(0, g_i(X)) + Σ_{j=1}^{n_h} β_j × max(0, |h_j(X)| − δ),

where max(0, g_i(X)) and max(0, |h_j(X)| − δ) represent the violations of a solution with respect to the ith inequality and jth equality constraints, respectively, and α_i and β_j are the corresponding penalty factors. The magnitude of the penalty factors affects the quality of the answers, and suitable penalty factors are problem-dependent.
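A minimal sketch of this penalty-function construction follows (the helper name is illustrative; a linear weighted sum of violations is used here, and formulations that raise each violation to a power are equally common):

```python
def penalized_fitness(f, ineq, eq, alphas, betas, delta=1e-4):
    """Build the fitness F(X) = f(X) + P(X).

    ineq: list of g_i, feasible when g_i(X) <= 0.
    eq:   list of h_j, feasible when h_j(X) = 0 (relaxed by delta).
    alphas, betas: penalty factors for the two constraint groups."""
    def F(X):
        P = sum(a * max(0.0, g(X)) for a, g in zip(alphas, ineq))
        P += sum(b * max(0.0, abs(h(X)) - delta) for b, h in zip(betas, eq))
        return f(X) + P
    return F
```

A feasible point incurs no penalty, so F coincides with f there, while infeasible points are pushed up in proportion to their violation and the chosen penalty factors.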
To solve constrained optimization problems, metaheuristic methods and a CHT should be linked so that the feasible search space can be recognized. Then, the optimizer tries to find the optimum or a near-optimum solution in the feasible region.
Therefore, in each iteration of an algorithm, the fitness of the population is evaluated according to the objective and the constraint(s), and based on the calculated fitness function, the next generation of the population is produced. In other words, the algorithm identifies the problem's search space using the fitness of the current population.

Results and Discussion
This section evaluates the performance of the SNS algorithm using 15 benchmark problems in various fields of engineering. One of these problems deals with the segmentation of satellite images in the field of remote sensing, as a real application of metaheuristic algorithms in engineering. Each of these examples was run 30 times independently using the SNS algorithm, and the results are compared with different counterpart algorithms from the literature. In selecting the counterpart algorithms, an attempt has been made to use the results of newly developed methods.

Cantilever Beam.
This problem is a structural engineering design example related to the weight optimization of a cantilever beam with a square cross section [55]. The beam is rigidly supported at one end, and a vertical force acts on the free node of the cantilever, as shown in Figure 5. The beam consists of five hollow square blocks of constant thickness, whose heights (or widths) are the decision variables, while the thickness is held fixed (here 2/3). The best solutions for this problem obtained by the SNS and various methods are listed in Table 2. It can be seen that the solution obtained by the SNS is better than those of the other methods. In addition, the SNS terminated after 12,000 evaluations. The statistical results of the SNS and the other methods are listed in Table 3, based on which the SNS algorithm obtains a more accurate answer with a smaller number of function evaluations (NFEs).
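The formulas of this benchmark are not reproduced above. As a hedged sketch, the commonly cited statement of the problem (which should be checked against [55]) has a linear weight objective over the five section heights and a single displacement constraint:

```python
def cantilever_weight(x):
    """Objective of the common statement: f = 0.0624 * (x1 + ... + x5)."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Displacement constraint of the common statement, required <= 0:
    61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1."""
    coeffs = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1.0
```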

Optimal Design of I-Shaped Beam.
The other typical engineering optimization problem is the I-beam design problem, which aims to minimize the vertical deflection of the beam shown in Figure 6 while simultaneously satisfying the cross-sectional area and stress constraints under given loads. The width of the flange b (= x1), the height of the section h (= x2), the thickness of the web t_w (= x3), and the thickness of the flange t_f (= x4) are the variables of this problem. The maximum vertical deflection of the beam is f(x) = PL³/(48EI), where the length of the beam (L) and the modulus of elasticity (E) are 5200 cm and 523.104 kN/cm², respectively.

Figure 6: A 3D view of the beam design problem.

Computational Intelligence and Neuroscience
Many optimizers have solved this nonlinearly constrained problem, and Table 4 presents the best results of these methods. In addition, statistical results comparing the performance of the SNS method are provided in Table 5. For this case study, the SNS needs 3600 function evaluations to reach its results, and it can be seen that the SNS performs better than the other methods.

Three-Bar Truss Design Problem.
This case considers the 3-bar planar truss structure shown in Figure 7. The volume of the statically loaded 3-bar truss is to be minimized subject to stress (σ) constraints on each of the truss members, and the objective is to evaluate the optimal cross-sectional areas of the members. The best results of different methods are presented in Table 6, and Table 7 provides the statistical results of these algorithms. It can be seen that the best objective value of the SNS is equal to or better than those of the other methods. The required number of function evaluations (NFEs) for the SNS algorithm is 4800, which is much lower than that of the other algorithms.

Tubular Column Design.
This problem is an example of designing a uniform column of tubular section to carry a compressive load at minimum cost. The problem has two design variables, the mean diameter of the column d (= x1) and the thickness of the tube t (= x2), which are shown in Figure 8. The column is made of a material with a yield stress of σ_y = 500 kgf/cm² and a modulus of elasticity of E = 0.85 × 10⁶ kgf/cm². According to the constraints g1 and g2, the induced stress in the column should be less than the buckling stress and the yield stress, respectively. The other constraints (g3, g4, g5, and g6) clamp the variables of the problem to their ranges. This problem was previously solved using various methods, and the best results of these methods and the SNS are presented in Table 8. The SNS uses 1250 evaluations to solve this problem. In addition, the statistical results of some methods are reported in Table 9. According to these results, the SNS finds better results than the other algorithms.

Speed Reducer Design.
In mechanical systems, one of the essential parts of the gearbox is the speed reducer, which can be employed in several applications [65]. In this optimization problem (see Figure 9), the weight of the speed reducer is to be minimized subject to 11 constraints. The problem has seven variables: face width b (= x1), module of teeth m (= x2), number of teeth in the pinion z (= x3), length of the first shaft between bearings l1 (= x4), length of the second shaft between bearings l2 (= x5), diameter of the first shaft d1 (= x6), and diameter of the second shaft d2 (= x7).

Table 4: Best results for the optimal design of the I-shaped beam problem.

Table 6: Best results of the three-bar truss design problem.

Figure 9: A schematic representation of the speed reducer.

This engineering problem has 11 constraints, seven nonlinear and four linear inequality constraints, which are based on (1) the bending stress of the gear teeth, (2) the surface stress, (3) the transverse deflections of the shafts, and (4) the stresses in the shafts. A comparison of the best optimal solutions of various optimization methods is given in Table 10. The SNS method requires 3750 evaluations to find its solution. The statistical results of the SNS and ten optimization methods are compared in Table 11. Among the compared optimization algorithms, the SNS has the lowest number of function evaluations, while its results are better than those of the other methods.

Piston Lever.
The main objective of this problem is to locate the piston components H (= x1), B (= x2), D (= x3), and X (= x4) by minimizing the oil volume when the lever of the piston is lifted from 0° to 45°, as shown in Figure 10. The inequality constraints of this problem consider the force equilibrium, the maximum bending moment of the lever, the minimum piston stroke, and geometrical conditions. The best solutions obtained by the SNS and some of the other algorithms are presented in Table 12. In addition, the performance of PSO [71], DE [71], GA [71], hybrid particle swarm optimization (HPSO) [71], HPSO with Q-learning [71], CS [18], ISA [63], CGO [37], MGA [40], AOS [39], and the SNS is summarized in Table 13.
The SNS algorithm obtains its results after 5000 evaluations, and these results are far better than those of the other methods.

Corrugated Bulkhead Design.
This problem aims to minimize the weight of a corrugated bulkhead in a chemical tanker [72], in which the design variables are the width (x1), depth (x2), length (x3), and plate thickness (x4). Tables 14 and 15 compare the best and statistical results of the SNS and other optimizers, respectively. According to these results, the SNS significantly improves on the solution quality of the other algorithms. In addition, the SNS method solves this problem after 3125 evaluations, which is the lowest value among the compared methods.

Design of Pressure Vessel.
A cylindrical vessel is capped at both ends by hemispherical heads, as shown in Figure 11. The objective is to minimize the total cost, including the costs of material, forming, and welding. This problem has four variables: the thickness of the shell T_s (= x1), the thickness of the head T_h (= x2), the inner radius R (= x3), and the length of the cylindrical section of the vessel, not including the head, L (= x4). In addition, x1 and x2 are integer multiples of 0.0625 in, while the other variables are continuous. This problem has been used to evaluate the performance of many algorithms. Tables 16 and 17 compare the best and statistical results of the SNS and other algorithms, respectively. The SNS needs 6000 NFEs to solve this problem, which is much lower than the other algorithms.
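As a hedged sketch, the widely used statement of this benchmark (cost in dollars, dimensions in inches; it should be checked against the paper's tables) can be coded as:

```python
import math

def vessel_cost(Ts, Th, R, L):
    """Total cost of material, forming, and welding (common statement)."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def vessel_constraints(Ts, Th, R, L):
    """The four standard constraints, all required to be <= 0."""
    g1 = -Ts + 0.0193 * R                  # shell thickness vs. radius
    g2 = -Th + 0.00954 * R                 # head thickness vs. radius
    g3 = (-math.pi * R ** 2 * L            # minimum enclosed volume
          - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0)
    g4 = L - 240.0                         # length limit
    return (g1, g2, g3, g4)
```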

Design of Tension/Compression Spring.
The tension/compression spring design problem is described in [81]; the objective is to minimize the weight of a tension/compression spring, as shown in Figure 12. The problem is subject to constraints on minimum deflection, shear stress, surge frequency, the outside diameter, and the design variables. The design variables are the mean coil diameter D (= x1), the wire diameter d (= x2), and the number of active coils N (= x3). Table 18 compares the SNS with many optimization algorithms in terms of best optimization results, and Table 19 presents the statistical results of these algorithms. The SNS algorithm solves this problem in 9000 evaluations; among the compared methods, only WOA [33] and MCEO [80] used fewer evaluations, while their results are not as good as those of the SNS.
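A hedged sketch of the common statement of this benchmark follows; named arguments are used to avoid confusion with the paper's index convention (D = x1, d = x2, N = x3), and the formulas should be checked against [81]:

```python
def spring_weight(d, D, N):
    """Spring weight in the common statement: f = (N + 2) * D * d^2."""
    return (N + 2) * D * d ** 2

def spring_constraints(d, D, N):
    """The four standard constraints, all required to be <= 0."""
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)                 # minimum deflection
    g2 = ((4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
          + 1 / (5108 * d ** 2) - 1)                         # shear stress
    g3 = 1 - 140.45 * d / (D ** 2 * N)                       # surge frequency
    g4 = (D + d) / 1.5 - 1                                   # outside diameter
    return (g1, g2, g3, g4)
```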

Design of Welded Beam.
This benchmark problem was introduced by Coello [77] and has been tackled by many researchers. As illustrated in Figure 13, the beam is under a vertical force, and the goal is to find the minimum manufacturing cost of the welded beam. The problem is subject to seven constraints of stress, deflection, welding, and geometry. The variables are the weld thickness h (= x1), height l (= x2), length t (= x3), and bar thickness b (= x4), as shown in Figure 13. Tables 20 and 21 compare the best and statistical results of various optimizers in dealing with the welded beam design problem. The SNS algorithm needs 9000 evaluations, which is the lowest NFE among the compared algorithms, while its results are better. In addition, the SNS algorithm has the lowest standard deviation, which shows its robustness in solving this problem.

(Abbreviations: G-QPSO: Gaussian quantum-behaved particle swarm optimization; CPSO: coevolutionary particle swarm optimization; CDE: coevolutionary differential evolution; SAP: self-adaptive penalty approach; EO: equilibrium optimizer; MCEO: multilevel cross entropy optimizer.)

Figure 11: Schematic view of the pressure vessel design.

Figure 13: Schematic of the welded beam structure with indication of design variables.

Design of Gear Train.
The gear train design problem is an unconstrained discrete design problem in mechanical engineering and was introduced by Sandgren [85]. The purpose of this benchmark task is to minimize the error between a required value and the obtained gear ratio, defined as the ratio of the angular velocity of the output shaft to that of the input shaft. The numbers of teeth of the gears, n_A (= x1), n_B (= x2), n_C (= x3), and n_D (= x4), are the design variables, and Figure 14 illustrates the 3D model of this problem. The best results of 19 algorithms, including the SNS, are presented in Table 22. It can be seen that all the algorithms find the optimum solution, except PSO [62] and BBO [62]. In addition, the statistical results of 14 algorithms are compared in Table 23. The proposed method outperforms most of the other algorithms in terms of the mean, worst, SD, and NFEs.
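As a hedged sketch, the common statement of Sandgren's problem minimizes the squared error between the obtained ratio and the target 1/6.931, with integer teeth counts in [12, 60]. Which pair of teeth counts forms the numerator depends on the gear arrangement; one common convention is used below:

```python
def gear_ratio_error(nA, nB, nC, nD):
    """Squared error between the target ratio 1/6.931 and the train's
    ratio nB * nD / (nA * nC), per the common statement of the problem."""
    return (1.0 / 6.931 - (nB * nD) / (nA * nC)) ** 2
```

At the well-known optimum teeth counts the error is on the order of 1e-12, while an arbitrary design such as four 12-tooth gears is off by many orders of magnitude.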

A Reinforced Concrete Beam Design.
Amir and Hasegawa [91] presented a simplified optimization problem of designing a reinforced concrete beam, as shown in Figure 15. The beam is assumed to be simply supported with a span of 30 ft and subjected to a live load of 2000 lbf and a dead load of 1000 lbf, including the weight of the beam. The concrete compressive strength (σc) is 5 ksi, and the yield stress of the reinforcing steel (σy) is 50 ksi. The cost of concrete is $0.02/in²/linear ft, and the cost of steel is $1.0/in²/linear ft. To minimize the total cost of the structure, the area of the reinforcement As(= x1), the width of the beam b(= x2), and the depth of the beam h(= x3) are considered as the design variables. The problem can be stated as

minimize:
f(x) = 29.4 x1 + 0.6 x2 x3,

subject to:
g1(x) = x2/x3 − 4 ≤ 0,
g2(x) = 180 + 7.375 x1^2/x3 − x1 x2 ≤ 0.

It is clear that the variables x1 and x2 are discrete, while x3 is continuous. The SNS method requires 1000 evaluations to reach the optimum solution. Table 24 presents the optimum designs obtained by the SNS and other methods for this problem. In addition, the statistical results of the FA [54], CS [18], AOS [39], and SNS are compared in Table 25. Obviously, the performance of the SNS method is better than that of the other algorithms.
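Since the SNS handles constraints with a simple penalty function (as noted in the conclusion), the evaluation of a candidate design for this problem can be sketched as follows. The constraint expressions used here follow the Amir and Hasegawa [91] formulation as it is commonly cited; treat them as an assumption of this sketch rather than the authors' exact code.

```python
def rc_beam_penalized(x, penalty=1e6):
    """Penalized cost of the reinforced concrete beam, x = [As, b, h]."""
    x1, x2, x3 = x
    cost = 29.4 * x1 + 0.6 * x2 * x3           # steel + concrete cost terms
    g1 = x2 / x3 - 4.0                          # width-to-depth ratio limit
    g2 = 180.0 + 7.375 * x1**2 / x3 - x1 * x2   # strength requirement
    # Simple quadratic exterior penalty: feasible designs pay no penalty.
    viol = sum(max(0.0, g) ** 2 for g in (g1, g2))
    return cost + penalty * viol

# Optimum commonly reported in the literature: As = 6.32 in^2, b = 34 in,
# h = 8.5 in, with a cost of about $359.208.
f_opt = rc_beam_penalized([6.32, 34.0, 8.5])
```

At the reported optimum both constraints are satisfied (g1 is exactly active), so the penalized value equals the raw cost.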

g8(X) = 4.72 − 0.5x4 − 0.19x2x3 − 0.0122x4x10 + 0.009325x6x10 + 0.000191x11^2 − 4 ≤ 0,
g9(X) = 10.58 − 0.674x1x2 − 1.95x2x8 + 0.02054x3x10 − 0.0198x4x10 + 0.028x6x10 − 9.9 ≤ 0,
g10(X) = 16.45 − 0.489x3x7 − 0.843x5x6 + 0.0432x9x10 − 0.0556x9x11 − 0.000786x11^2 − 15.7 ≤ 0,

variable range: the variable bounds of [93] are adopted.

The design of the car side impact is also used as a benchmark problem to evaluate the performance of various methods. The best results of the SNS and these algorithms are presented in Table 26. It should be noted that the results of the other algorithms evaluated by Gandomi et al. [18,54,66] were obtained with different variable ranges, but in this paper, the variable ranges of [93] are utilized. Table 27 summarizes the statistical results obtained by the different optimization algorithms for the car side impact design problem. In this case, the SNS achieves its results with 20,000 NFEs. According to the results presented in [18], the CS method has better statistical performance than all of the other methods. Nevertheless, the best result of the SNS is better than those of the other algorithms.
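The constraints listed above translate directly into code; a minimal sketch covering only g8–g10 from this excerpt (the remaining constraints and the objective of the car side impact problem are defined elsewhere):

```python
def car_side_constraints(x):
    """Evaluate g8-g10 of the car side impact problem; feasible when g <= 0.

    x is the 11-dimensional design vector, 1-indexed in the text.
    """
    x = [0.0] + list(x)   # pad so x[i] matches the subscripts in the paper
    g8 = (4.72 - 0.5 * x[4] - 0.19 * x[2] * x[3] - 0.0122 * x[4] * x[10]
          + 0.009325 * x[6] * x[10] + 0.000191 * x[11] ** 2) - 4.0
    g9 = (10.58 - 0.674 * x[1] * x[2] - 1.95 * x[2] * x[8]
          + 0.02054 * x[3] * x[10] - 0.0198 * x[4] * x[10]
          + 0.028 * x[6] * x[10]) - 9.9
    g10 = (16.45 - 0.489 * x[3] * x[7] - 0.843 * x[5] * x[6]
           + 0.0432 * x[9] * x[10] - 0.0556 * x[9] * x[11]
           - 0.000786 * x[11] ** 2) - 15.7
    return g8, g9, g10

# Sanity check at the origin: only the constant terms remain.
g_zero = car_side_constraints([0.0] * 11)
```

At x = 0 the product terms vanish, so the three values reduce to 4.72 − 4, 10.58 − 9.9, and 16.45 − 15.7, which is a quick way to verify a transcription of the constraints.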

Cantilever Stepped Beam.
This problem is a good benchmark for verifying the capability of optimization methods to solve continuous, discrete, and mixed-variable structural design problems.
This problem aims to minimize the volume of the beam. The widths of the segments (x1, x2, x3, x4, x5) and their heights (x6, x7, x8, x9, x10) are chosen as the design variables. These ten variables are illustrated in Figure 17. In addition to the bending stress constraints, a specified aspect ratio is imposed such that the ratio of height to width in each segment of the beam is limited to be less than 20. The problem is formulated as follows:

minimize:
f(x) = Σ (i = 1 to 5) xi x(i+5) li,

subject to:
gi(x) = σi − σd ≤ 0, i = 1, ..., 5,
g6(x) = δ − Δmax ≤ 0,
g(6+i)(x) = x(i+5)/xi − 20 ≤ 0, i = 1, ..., 5,

where li is the length and σi the bending stress of the ith segment. The first five constraints require the bending stress in each beam segment to remain below the allowable limit (σd). Also, the deflection δ of the cantilever beam tip must be smaller than the limit deflection (Δmax). The aspect ratio between the height and width of the cross sections must be less than 20 and is enforced by the last five constraints. Six of the variables (x1, x2, x3, x6, x7, x8) are discrete, and the rest (x4, x5, x9, x10) are continuous. The best and statistical results of the FA [54], thermal exchange optimization (TEO) [82], PSO [82], and SNS are presented in Tables 28 and 29, respectively. To solve this problem, the SNS needs 20,000 evaluations, which is lower than the NFEs of the other compared methods, and at the same time, it outperforms all of them in terms of best, mean, worst, and SD.
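A typical way to handle the mixed variables in such problems is to let the optimizer work in a continuous space and snap the discrete dimensions to their admissible catalogue before evaluating the objective. A hypothetical sketch follows; the catalogue values are illustrative, not those of the original benchmark.

```python
# 0-based indices of the discrete design variables: the widths x1-x3 and
# heights x6-x8 of the first three segments.
DISCRETE_IDX = [0, 1, 2, 5, 6, 7]
# Hypothetical catalogue of available section dimensions (illustrative only).
CATALOGUE = [1.0, 2.0, 2.4, 2.6, 2.8, 3.0, 3.1, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]

def to_mixed(x):
    """Snap the discrete coordinates of a continuous candidate to the catalogue."""
    y = list(x)
    for i in DISCRETE_IDX:
        y[i] = min(CATALOGUE, key=lambda v: abs(v - x[i]))
    return y

snapped = to_mixed([2.45] * 10)   # discrete slots snap; continuous ones pass through
```

The continuous coordinates (x4, x5, x9, x10) are left untouched, so the fitness function always receives a vector that respects the discrete catalogue.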

Real Application of SNS in Remote Sensing (Segmentation of Satellite Images).
Figure 17: A 3D model of the stepped cantilever beam.

Image segmentation is an important topic in the field of remote sensing due to the increasing volume of images collected from satellites, airplanes, and
other platforms [95]. Image segmentation aims to partition an image into a number of homogeneous classes. In this study, entropy-based multilevel thresholding is adopted, in which the m thresholds are selected to maximize

f(t1, t2, ..., tm) = H0 + H1 + ... + Hm,

where H0, H1, ..., Hm are the entropy values of the m + 1 sections or classes, pi is the probability of pixel intensity i, and N is the total number of distinct intensity levels. The utilized image and its histogram patterns are shown in Figure 18.
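The entropy objective above can be sketched in code. The sketch assumes a Kapur-type class entropy (a common choice for multilevel thresholding); the authors' exact objective may differ in detail.

```python
import math

def multilevel_entropy(hist, thresholds):
    """Sum of class entropies H0 + ... + Hm for a normalized histogram.

    thresholds are strictly increasing bin indices splitting [0, len(hist)).
    """
    bounds = [0] + list(thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi])                  # probability mass of the class
        if w <= 0.0:
            continue                          # empty class contributes nothing
        total += -sum(p / w * math.log(p / w) for p in hist[lo:hi] if p > 0.0)
    return total

# Example: a uniform 4-level histogram split in the middle gives 2*ln(2).
score = multilevel_entropy([0.25, 0.25, 0.25, 0.25], [2])
```

A metaheuristic such as the SNS then searches over the threshold vector (t1, ..., tm) to maximize this score.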
This satellite image is taken from Pléiades Satellite Imagery to carry out the experimental study for segmentation. It can be seen that the histogram of the satellite image has a multimodal pattern, and it is very difficult to segment such an image, which possesses immense information content.
The experiment was carried out 10 times to choose the best result of each algorithm. Figure 19 gives the segmented images for four different levels of thresholding (n) and compares the outputs of the algorithms.

Figure 19: Segmentation results of test images using metaheuristic algorithms for four different threshold levels.

Various criteria can be used for comparing the performance of metaheuristic algorithms in satellite image segmentation. Peak signal-to-noise ratio (PSNR) and feature similarity index (FSIM) are two quantitative performance metrics, which are utilized in this study [99]. PSNR measures the accuracy of the reconstructed image and is formulated as follows:

PSNR = 20 log10(255/RMSE), RMSE = sqrt( Σ (i = 1 to m) Σ (j = 1 to n) (X(i, j) − X′(i, j))^2 / (mn) ),

where mn is the size of the image and X and X′ are the original and the processed images, respectively. In addition, FSIM is a criterion that calculates the similarity of the thresholded and original images as follows:

FSIM = Σ (x ∈ Ω) S_L(x) PC_m(x) / Σ (x ∈ Ω) PC_m(x),

where Ω is the whole image spatial domain, S_L(x) is the similarity at pixel x, and PC_m(x) is the phase congruency measure [99].

This comprehensive study demonstrates that the developed SNS is competitive with the other metaheuristic algorithms. Based on the results of solving the classical engineering problems, it can be concluded that the SNS algorithm can perform better than other algorithms in dealing with semi-real constrained problems. In addition, the results of the image segmentation problem show the SNS algorithm's ability to solve real-world problems.
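As a supplement to the segmentation study, the PSNR metric described above is straightforward to compute; a minimal sketch for 8-bit images, assuming NumPy arrays:

```python
import numpy as np

def psnr(original, processed):
    """Peak signal-to-noise ratio in dB for 8-bit images of equal size."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    rmse = np.sqrt(np.mean(diff ** 2))
    if rmse == 0.0:
        return float("inf")       # identical images
    return 20.0 * np.log10(255.0 / rmse)

# Example: a constant error of 10 gray levels gives RMSE = 10,
# so PSNR = 20*log10(25.5) ~ 28.13 dB.
a = np.full((4, 4), 100, dtype=np.uint8)
b = np.full((4, 4), 110, dtype=np.uint8)
value = psnr(a, b)
```

Higher PSNR means the thresholded image is closer to the original, which is how the segmented outputs of the compared algorithms are ranked.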

Conclusion
The social network search (SNS) is a newly developed metaheuristic algorithm that mimics the behavior of social network users in expressing their opinions. In the present study, the SNS algorithm was employed for solving 14 semi-real constrained optimization problems and one real-world application in the field of remote sensing at a relatively low computational cost. From the comparative study, the SNS has shown its potential to handle various constrained optimization problems, and its performance is much better than that of other state-of-the-art algorithms in terms of the selected performance metrics. This is partly because there are no parameters to be fine-tuned in the SNS. In addition, it is worth mentioning that a simple penalty function method is used for constraint handling, while the other compared methods may have used more advanced methods for this task.
This algorithm uses four moods of users in social networks, namely, imitation, conversation, disputation, and innovation. Users are influenced to express their new views through these four moods, which are simulated from the real-world behaviors of users in social networks and occur randomly for each of them. As further studies, different modifications can be employed to improve the performance of the SNS. Some of these modifications are listed below:
(i) In the course of the iterations, each user is affected through a randomly selected mood. Developing this random selection into an adaptive selection may improve the performance.
(ii) In the imitation mood, the new view is created inside the imitation space. A new model for this space can have a high impact.
(iii) The shock radius (R) and popularity radius (r) are two key parameters for improving the imitation mood output.
(iv) In the imitation, conversation, and innovation moods, a random user (Xj) is selected. The selection of this user strongly affects the performance of the SNS. Another selection mechanism can be useful.
(v) The subject (Xk) in the conversation mood has an effective impact on the quality of the newly generated solutions.
(vi) In the conversation mood, the direction and size of the movements are affected by sign(fi − fj). Changing this factor in an adaptive way that influences the size of the movements is desirable.
(vii) In the disputation mood, a random number of users is considered. Different strategies can be integrated with this mood. For example, different neighborhood topologies can be used. In addition, a dynamic regrouping schema can be useful to improve the performance of the disputation mood.
(viii) A new mood can be designed to improve the ability of the SNS by modeling another specific situation in social networks.
Hybridization of the proposed algorithm with other popular algorithms is a common way to benefit from the ideas of different metaheuristics and to develop a more robust optimization algorithm. In addition, the ability of this algorithm should be examined in dealing with other complex real-world optimization problems in different branches of science.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.